
MAGVIT: Masked Generative Video Transformer (2212.05199v2)

Published 10 Dec 2022 in cs.CV

Abstract: We introduce the MAsked Generative VIdeo Transformer, MAGVIT, to tackle various video synthesis tasks with a single model. We introduce a 3D tokenizer to quantize a video into spatial-temporal visual tokens and propose an embedding method for masked video token modeling to facilitate multi-task learning. We conduct extensive experiments to demonstrate the quality, efficiency, and flexibility of MAGVIT. Our experiments show that (i) MAGVIT performs favorably against state-of-the-art approaches and establishes the best-published FVD on three video generation benchmarks, including the challenging Kinetics-600. (ii) MAGVIT outperforms existing methods in inference time by two orders of magnitude against diffusion models and by 60x against autoregressive models. (iii) A single MAGVIT model supports ten diverse generation tasks and generalizes across videos from different visual domains. The source code and trained models will be released to the public at https://magvit.cs.cmu.edu.

Analysis of MAGVIT: Masked Generative Video Transformer

The paper "MAGVIT: Masked Generative Video Transformer" introduces an innovative approach to video synthesis utilizing the Masked Generative Video Transformer, referred to as MAGVIT. This method is designed to tackle various video synthesis tasks with a single, unified model, bringing forward substantial discussion points in the realms of video generation quality, efficiency, and adaptability.

Key Contributions

MAGVIT combines masked token modeling with multi-task learning to generate video content. This moves beyond traditional single-task approaches by allowing one model to handle diverse generation tasks, ranging from class-conditional generation to inpainting of moving objects. The authors report the following key results:

  1. Quality Improvements: MAGVIT achieves the best-published Fréchet Video Distance (FVD) scores on three major video generation benchmarks: UCF-101, BAIR Robot Pushing, and Kinetics-600. Notably, it reduces the FVD for class-conditional generation on UCF-101 from 332 to 76, a substantial improvement in generation fidelity.
  2. Efficiency: In inference time, MAGVIT outperforms existing video generation methods, running two orders of magnitude faster than diffusion models and 60 times faster than autoregressive models. For instance, it can generate a 16-frame 128×128 video clip in 12 decoding steps, taking about 0.25 seconds on a TPU (a sketch of this non-autoregressive decoding scheme appears after this list).
  3. Multi-task Capabilities: A single MAGVIT model can efficiently handle ten diverse generation tasks while generalizing effectively across videos from varied visual domains, showcasing the model's robustness and flexibility.
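
The speedup in item 2 comes from non-autoregressive decoding: instead of emitting tokens one at a time, the transformer predicts all masked tokens in parallel and refines them over a small, fixed number of steps. The sketch below illustrates this general idea in the MaskGIT style of confidence-based re-masking; the function name, the cosine schedule, and all parameters are illustrative assumptions, not code from the MAGVIT release.

```python
import math
import torch

def iterative_parallel_decode(transformer, tokens, mask, num_steps=12,
                              mask_token_id=0):
    """Sketch of MaskGIT-style parallel decoding: fill in masked video
    tokens over a small, fixed number of refinement steps.

    tokens: (B, N) int64 tensor of token ids; masked slots hold mask_token_id.
    mask:   (B, N) bool tensor, True where a token still needs to be generated.
    """
    total_masked = mask.sum(dim=-1).max().item()
    for step in range(num_steps):
        logits = transformer(tokens)                       # (B, N, vocab_size)
        confidence, predictions = logits.softmax(dim=-1).max(dim=-1)

        # Tentatively accept predictions at every still-masked position.
        tokens = torch.where(mask, predictions, tokens)

        # Cosine schedule: fewer positions remain masked as decoding progresses.
        frac = math.cos(math.pi / 2 * (step + 1) / num_steps)
        num_remask = int(frac * total_masked)
        if num_remask == 0:
            break

        # Keep the least confident masked positions masked for the next step.
        confidence = confidence.masked_fill(~mask, float("inf"))
        remask_idx = confidence.topk(num_remask, dim=-1, largest=False).indices
        new_mask = torch.zeros_like(mask)
        new_mask.scatter_(1, remask_idx, True)
        mask = new_mask & mask                             # only re-mask open slots
        tokens = tokens.masked_fill(mask, mask_token_id)
    return tokens
```

Because every step refines all remaining masked positions at once, the total number of transformer forward passes is fixed (here 12) regardless of sequence length, which is where the reported speedup over token-by-token autoregressive decoding comes from.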

Methodological Advancements

MAGVIT uses a two-stage framework: spatial-temporal tokenization followed by multi-task masked token modeling. The tokenization stage is a 3D vector-quantized (VQ) autoencoder that compresses the video into discrete tokens, giving a high-fidelity representation in a low-dimensional latent space; a minimal sketch of this step follows.
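
The sketch below shows the core of such a 3D VQ tokenizer: a 3D convolutional encoder downsamples the video in space and time, each latent vector is replaced by its nearest codebook entry (the discrete token), and a decoder reconstructs pixels from the quantized latents. The layer sizes, downsampling rates, and class name are assumptions for illustration, not MAGVIT's actual architecture.

```python
import torch
import torch.nn as nn

class Simple3DVQTokenizer(nn.Module):
    """Minimal sketch of a 3D VQ autoencoder: encode -> quantize -> decode.
    Codebook/commitment losses and straight-through gradients are omitted."""

    def __init__(self, codebook_size=1024, embed_dim=256):
        super().__init__()
        # 3D convolutions downsample time and space (e.g. 16x128x128 -> 2x16x16).
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(128, embed_dim, kernel_size=4, stride=2, padding=1),
        )
        self.codebook = nn.Embedding(codebook_size, embed_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(embed_dim, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1),
        )

    def quantize(self, latents):
        # latents: (B, C, T, H, W); replace each vector by its nearest code.
        b, c, t, h, w = latents.shape
        flat = latents.permute(0, 2, 3, 4, 1).reshape(-1, c)
        distances = torch.cdist(flat, self.codebook.weight)   # (B*T*H*W, K)
        token_ids = distances.argmin(dim=-1)                   # discrete tokens
        quantized = self.codebook(token_ids).view(b, t, h, w, c)
        return token_ids.view(b, t, h, w), quantized.permute(0, 4, 1, 2, 3)

    def forward(self, video):
        # video: (B, 3, T, H, W), values roughly in [-1, 1]
        latents = self.encoder(video)
        token_ids, quantized = self.quantize(latents)
        recon = self.decoder(quantized)
        return recon, token_ids
```

The discrete `token_ids` grid is what the second-stage transformer operates on, so the quality of the whole pipeline hinges on how faithfully this tokenizer can reconstruct video from its codes.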

For learning video tasks, the paper introduces Conditional Masked Modeling by Interior Tokens (COMMIT), an embedding scheme that places task-specific condition information inside the masked video token sequence, making a single model adaptable to multiple tasks without architectural changes. Supported tasks include frame prediction, interpolation, inpainting, outpainting, and more, demonstrating the model's broad applicability in video synthesis; a rough sketch of this conditioning scheme follows.
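
The central idea is that the condition (e.g., known frames or the region outside an inpainting hole) is expressed as tokens placed at interior positions of the otherwise masked video token sequence, so the transformer learns to predict the full target tokens from a mix of condition tokens and mask tokens. The sketch below illustrates constructing such an input for training; the function name, the random masking policy, and the exact handling of condition positions are assumptions for illustration, not the authors' precise formulation.

```python
import torch

def build_commit_input(target_tokens, condition_tokens, condition_region,
                       mask_ratio, mask_token_id):
    """Illustrative construction of a COMMIT-style training input.

    target_tokens:    (B, N) tokens of the full ground-truth video.
    condition_tokens: (B, N) tokens of the task condition (e.g. a padded
                      partial video re-tokenized to the full shape).
    condition_region: (B, N) bool, True where the condition provides content.
    mask_ratio:       fraction of positions to replace with the mask token
                      (assumed not to exceed the non-condition positions).
    """
    b, n = target_tokens.shape

    # Randomly pick positions to mask among those the model must generate.
    scores = torch.rand(b, n)
    scores[condition_region] = 2.0      # never mask inside the condition region
    num_masked = int(mask_ratio * n)
    masked_positions = scores.argsort(dim=-1)[:, :num_masked]
    mask = torch.zeros(b, n, dtype=torch.bool)
    mask.scatter_(1, masked_positions, True)

    # Interior-token embedding: condition tokens sit inside the sequence at
    # their spatial-temporal positions; everything to be generated is masked.
    model_input = torch.where(condition_region, condition_tokens, target_tokens)
    model_input = model_input.masked_fill(mask, mask_token_id)

    # The transformer is then trained to predict target_tokens from model_input.
    return model_input, mask
```

Because the condition is encoded in the token sequence itself rather than in task-specific heads, switching between prediction, interpolation, inpainting, and outpainting only changes how `condition_tokens` and `condition_region` are built, not the model.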

Implications and Future Work

The implications of MAGVIT are profound both theoretically and practically. The unification of diverse video tasks under a single model architecture could potentially streamline the development of video synthesis tools, reducing computational costs and increasing system scalability. The efficiency gain paves the way for real-time applications and wider accessibility of high-quality video creation technologies.

Looking ahead, MAGVIT could inspire further research into expanding transformer technologies into other domains of video analysis and synthesis, including those requiring contextual understanding and sequence logic. Given the evolution of AI frameworks, the adaptability of MAGVIT might also contribute significantly toward advancements in virtual reality, augmented reality experiences, and autonomous decision-making systems.

Overall, MAGVIT represents a substantial effort in the consolidation and advancement of video synthesis methodologies, providing a base for future work striving for efficiency and versatility in AI-driven content creation.

Authors (11)
  1. Lijun Yu (22 papers)
  2. Yong Cheng (58 papers)
  3. Kihyuk Sohn (54 papers)
  4. José Lezama (19 papers)
  5. Han Zhang (338 papers)
  6. Huiwen Chang (28 papers)
  7. Alexander G. Hauptmann (40 papers)
  8. Ming-Hsuan Yang (376 papers)
  9. Yuan Hao (5 papers)
  10. Irfan Essa (91 papers)
  11. Lu Jiang (90 papers)
Citations (163)