Special Issue: Large-Scale Generative Models for Content Creation and Manipulation
Guest Editors:
  • Shengfeng He, Singapore Management University, Singapore
  • Lin Gao, University of Chinese Academy of Sciences, China
  • Hongbo Fu, City University of Hong Kong, Hong Kong SAR, China
  • Varun Jampani, Google Research, USA
  • Lu Jiang, Google Research, USA; Carnegie Mellon University, USA
  • Ming-Hsuan Yang, University of California at Merced, USA; Google, USA

The field of computer vision has advanced rapidly in recent years, driven largely by remarkable progress in generative models. Large-scale generative models have emerged as powerful tools for content creation and manipulation, transforming how visual content is generated and modified. These models can generate high-quality images, videos, and 3D models, and they enable diverse manipulation tasks such as style transfer, image inpainting, and object manipulation, offering unprecedented control over visual elements and fostering innovation across a wide range of applications.
Large-scale generative models leverage sophisticated architectures, such as generative adversarial networks (GANs), variational autoencoders (VAEs), diffusion models, and flow-based models, to synthesize visually appealing and realistic content. They tackle the challenges posed by generating high-resolution and diverse images and videos, incorporating temporal consistency and coherence for video generation, and enabling the creation of immersive content for applications like virtual reality and entertainment. Moreover, these models facilitate content manipulation by enabling style transfer, where the visual characteristics of one image or video are transferred to another, image and video inpainting for filling missing or corrupted regions seamlessly, and object manipulation for removing, adding, or rearranging objects within visual scenes. Additionally, large-scale generative models have extended their capabilities to 3D content manipulation, encompassing tasks such as shape generation, texture synthesis, and deformation. The development and exploration of large-scale generative models for content creation and manipulation present exciting opportunities to transform the way we generate and modify visual content.

Scope
This special issue invites original research articles and reviews focusing on large-scale generative models for content creation and manipulation. The topics of interest include, but are not limited to:
1. Large-scale generative models for 2D/3D image and video synthesis:
- Advanced architectures and training techniques for generating high-resolution and diverse images and videos.
- Temporal consistency and coherence in video generation.
- Novel applications in content synthesis, virtual reality, and entertainment.
 
2. Content manipulation using large-scale generative models:
- Style transfer for artistic rendering, domain adaptation, and visual effects.
- Image and video inpainting for filling missing or corrupted regions.
- Object manipulation, removal, and rearrangement in images and videos.
- 3D content manipulation, including shape generation, texture synthesis, and deformation.
 
3. Training and optimization of large-scale generative models:
- Scalable training strategies for handling large datasets and complex generative models.
- Regularization and normalization techniques for improving training stability and convergence.
- Model compression and acceleration for efficient deployment and real-time applications.
 
4. Evaluation and benchmarking of large-scale generative models:
- Metrics and protocols for assessing the quality and diversity of generated content.
- Comparative studies and analysis of different large-scale generative models.
- Evaluation on challenging datasets and benchmarks for content creation and manipulation tasks.
 
5. Applications and impact of large-scale generative models:
- Content creation and enhancement in areas such as digital art, design, and advertising.
- Content manipulation for data augmentation, data synthesis, and virtual reality applications.
- Ethical considerations and challenges related to large-scale generative models.

Important Dates:
  • Manuscript submission deadline: March 31, 2024 (extended from February 28, 2024)
  • First review notification: May 31, 2024
  • Revised manuscript submission: July 31, 2024
  • Final review notification: September 30, 2024
  • Final manuscript submission: October 31, 2024
  • Publication date: December 31, 2024

Submission Guidelines
Please submit via IJCV Editorial Manager: www.editorialmanager.com/visi
Choose "SI: Large Scale Generative Models" from the Article Type dropdown.

Submitted papers should present original, unpublished work relevant to one of the topics of the Special Issue. All submitted papers will be evaluated on the basis of relevance, significance of contribution, technical quality, scholarship, and quality of presentation by at least two independent reviewers. It is the policy of the journal that no submission, or substantially overlapping submission, be published or under review at another journal or conference at any time during the review process. Manuscripts will undergo peer review and must conform to the author guidelines available on the IJCV website at: https://www.springer.com/11263.

If you have any questions related to this IJCV SI (https://www.springer.com/journal/11263/updates/25868054), please contact shengfenghe@smu.edu.sg.
The reviewing process will begin once we have received your submission. Submit your article now!