
Movie Gen: A Cast of Media Foundation Models (2410.13720v1)

Published 17 Oct 2024 in cs.CV, cs.AI, cs.LG, and eess.IV

Abstract: We present Movie Gen, a cast of foundation models that generates high-quality, 1080p HD videos with different aspect ratios and synchronized audio. We also show additional capabilities such as precise instruction-based video editing and generation of personalized videos based on a user's image. Our models set a new state-of-the-art on multiple tasks: text-to-video synthesis, video personalization, video editing, video-to-audio generation, and text-to-audio generation. Our largest video generation model is a 30B parameter transformer trained with a maximum context length of 73K video tokens, corresponding to a generated video of 16 seconds at 16 frames-per-second. We show multiple technical innovations and simplifications on the architecture, latent spaces, training objectives and recipes, data curation, evaluation protocols, parallelization techniques, and inference optimizations that allow us to reap the benefits of scaling pre-training data, model size, and training compute for training large scale media generation models. We hope this paper helps the research community to accelerate progress and innovation in media generation models. All videos from this paper are available at https://go.fb.me/MovieGenResearchVideos.

Authors (88)
  1. Adam Polyak (29 papers)
  2. Amit Zohar (6 papers)
  3. Andrew Brown (31 papers)
  4. Andros Tjandra (39 papers)
  5. Animesh Sinha (14 papers)
  6. Ann Lee (29 papers)
  7. Apoorv Vyas (15 papers)
  8. Bowen Shi (82 papers)
  9. Chih-Yao Ma (27 papers)
  10. Ching-Yao Chuang (16 papers)
  11. David Yan (10 papers)
  12. Dhruv Choudhary (16 papers)
  13. Dingkang Wang (24 papers)
  14. Geet Sethi (4 papers)
  15. Guan Pang (19 papers)
  16. Haoyu Ma (45 papers)
  17. Ishan Misra (65 papers)
  18. Ji Hou (25 papers)
  19. Jialiang Wang (36 papers)
  20. Kiran Jagadeesh (1 paper)
Citations (43)

Summary

Overview of "Movie Gen: A Cast of Media Foundation Models"

The paper "Movie Gen: A Cast of Media Foundation Models," introduces a comprehensive suite of foundation models designed to generate high-quality 1080p HD videos with synchronized audio, showcasing capabilities such as text-to-video synthesis, video personalization, and precise video editing. These models represent the state-of-the-art across multiple tasks, effectively setting new benchmarks for media generation.

Key Contributions

  1. Model Architecture and Training:
    • The core of Movie Gen's architecture is a 30B parameter transformer trained with a maximum context length of 73K video tokens, equivalent to generating 16 seconds of video at 16 FPS (a back-of-the-envelope token-count sketch follows this list).
    • The paper outlines several technical innovations in architecture design, data curation, training protocols, and inference optimizations. These enhancements enable the model to handle the scaling of pre-training data and compute effectively.
  2. Text-to-Video Generation:
    • Movie Gen Video, the largest model in the suite, excels in text-to-image and text-to-video generation, supporting multiple aspect ratios and resolutions. It is pretrained on a large dataset comprising both images and videos.
    • The training process involves stages for scaling resolution and refining the model with high-quality video datasets to improve the motion and aesthetic quality of outputs.
  3. Video Personalization:
    • The Personalized Movie Gen Video model is capable of generating videos featuring specific individuals based on facial input, preserving identity while adhering to text prompts.
    • The model is trained with a blend of paired and cross-paired data and uses a vision encoder to capture identity features from reference images (a schematic conditioning sketch follows this list).
  4. Video Editing:
    • Movie Gen Edit demonstrates state-of-the-art performance in video editing by employing innovative training techniques without relying on supervised video editing data.
    • Key to its success is a multi-stage training process that begins with image editing and proceeds to more complex tasks like synthetic multi-frame video editing and backtranslation.
  5. Audio Generation:
    • Movie Gen Audio, a 13B parameter model, generates high-quality cinematic soundtracks, with sound effects and music aligned to the input video.
    • It combines a flow-matching training objective with a diffusion-transformer backbone and audio codecs to support long-form video-to-audio generation (a minimal flow-matching sketch follows this list).
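As a rough illustration of the context length quoted in item 1, the short sketch below works out how a 73K-token budget could cover 16 seconds of video at 16 FPS. The 8x temporal compression factor and the square per-frame token grid are assumptions made for illustration, not figures reported in the paper.

```python
# Back-of-the-envelope check of the 73K-token context window.
# The temporal compression factor and square token grid are assumed values,
# not numbers taken from the Movie Gen paper.
seconds, fps = 16, 16
raw_frames = seconds * fps                            # 256 video frames
temporal_compression = 8                              # assumed latent-space factor
latent_frames = raw_frames // temporal_compression    # 32 latent frames
tokens_per_frame = 73_000 // latent_frames            # ~2281 tokens per latent frame
grid_side = round(tokens_per_frame ** 0.5)            # roughly a 48x48 spatial grid
print(raw_frames, latent_frames, tokens_per_frame, grid_side)
```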
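For the personalization model (item 3), the snippet below sketches one common way to inject identity features into a text-conditioned generator: project reference-image features into the text-embedding space and concatenate them with the prompt tokens as cross-attention context. The module names, dimensions, and dummy tensors are hypothetical stand-ins, not Movie Gen's actual components.

```python
# Hypothetical sketch: conditioning a video generator on identity features.
# The encoder/projection here are stand-ins, not Movie Gen's actual modules.
import torch
import torch.nn as nn

class IdentityConditioner(nn.Module):
    """Projects reference-image features into the text-conditioning space
    so the generator's cross-attention can attend to them alongside the prompt."""
    def __init__(self, vision_dim=1024, text_dim=4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, text_dim)

    def forward(self, vision_feats, text_feats):
        # vision_feats: (B, N_img_tokens, vision_dim) from a frozen vision encoder
        # text_feats:   (B, N_txt_tokens, text_dim)   from the text encoder
        id_tokens = self.proj(vision_feats)
        # Concatenate along the sequence axis to form the cross-attention context.
        return torch.cat([text_feats, id_tokens], dim=1)

# Usage with dummy tensors
cond = IdentityConditioner()
vision_feats = torch.randn(2, 16, 1024)   # features of the reference face image
text_feats = torch.randn(2, 77, 4096)     # encoded text prompt
context = cond(vision_feats, text_feats)  # (2, 93, 4096) conditioning context
```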
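For the audio model (item 5), here is a minimal, self-contained sketch of a flow-matching training step on toy data: noise and data are linearly interpolated, and the network regresses the constant velocity that transports one to the other. The tiny MLP and toy batch stand in for Movie Gen Audio's diffusion-transformer backbone and audio latents.

```python
# Minimal flow-matching training step on toy data. The MLP below is a
# stand-in for a diffusion-transformer backbone, not Movie Gen Audio itself.
import torch
import torch.nn as nn

velocity_model = nn.Sequential(          # predicts velocity from (x_t, t)
    nn.Linear(65, 128), nn.SiLU(), nn.Linear(128, 64)
)
optimizer = torch.optim.Adam(velocity_model.parameters(), lr=1e-4)

def flow_matching_step(x1):
    """One step: regress the velocity that transports noise x0 to data x1."""
    x0 = torch.randn_like(x1)                        # noise sample
    t = torch.rand(x1.shape[0], 1)                   # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1                       # linear interpolation path
    target_velocity = x1 - x0                        # constant velocity along the path
    pred = velocity_model(torch.cat([xt, t], dim=-1))
    loss = nn.functional.mse_loss(pred, target_velocity)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy "data" batch standing in for audio latents
loss = flow_matching_step(torch.randn(8, 64))
```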

Implications and Future Directions

The Movie Gen models have extended the boundaries of generative AI for media and offer promising implications for industries ranging from entertainment to personalized content creation.

  • Scalability and Efficiency: The methodologies demonstrated for scaling models imply that larger architectures can be efficiently managed and trained across extensive datasets, paving the way for further enhancements in media generation quality and diversity.
  • Benchmarking and Open Research: The release of comprehensive benchmarks like Movie Gen Video Bench and Movie Gen Audio Bench aims to standardize evaluation metrics, ensuring robust comparisons in future research.
  • Applications and Ethical Considerations: As these models approach real-world deployment, there are significant considerations for ethical usage, including bias, misuse, and the sociocultural impacts of media content generated by AI.

Overall, this paper marks a substantial advancement in the domain of media generation, providing a cornerstone for continued research and application in generative AI. It underscores the potential and challenges of scaling AI capabilities in video and audio synthesis, offering both technical and conceptual insights into building the next generation of generative models.
