xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations (2408.12590v2)

Published 22 Aug 2024 in cs.CV and cs.AI

Abstract: We present xGen-VideoSyn-1, a text-to-video (T2V) generation model capable of producing realistic scenes from textual descriptions. Building on recent advancements, such as OpenAI's Sora, we explore the latent diffusion model (LDM) architecture and introduce a video variational autoencoder (VidVAE). VidVAE compresses video data both spatially and temporally, significantly reducing the length of visual tokens and the computational demands associated with generating long-sequence videos. To further address the computational costs, we propose a divide-and-merge strategy that maintains temporal consistency across video segments. Our Diffusion Transformer (DiT) model incorporates spatial and temporal self-attention layers, enabling robust generalization across different timeframes and aspect ratios. We have devised a data processing pipeline from the very beginning and collected over 13M high-quality video-text pairs. The pipeline includes multiple steps such as clipping, text detection, motion estimation, aesthetics scoring, and dense captioning based on our in-house video-LLM model. Training the VidVAE and DiT models required approximately 40 and 642 H100 days, respectively. Our model supports over 14-second 720p video generation in an end-to-end way and demonstrates competitive performance against state-of-the-art T2V models.

xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations

The paper "xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations" presents a text-to-video (T2V) generation model that leverages compressed representations to create realistic video sequences from textual descriptions. This model, named xGen-VideoSyn-1, introduces a novel combination of video variational autoencoding (VidVAE) and diffusion transformer (DiT) architectures, addressing several critical challenges in the domain of T2V synthesis, such as computational efficiency and video quality.

Key Contributions

  1. Video-Specific VAE: The paper proposes a VidVAE that achieves spatial and temporal compression, significantly reducing the computational costs associated with video generation. Unlike traditional models that compress each frame independently, VidVAE incorporates temporal compression, yielding a more efficient encoding of video sequences. This approach allows for the generation of long sequences without the prohibitive computational expense.
  2. Divide-and-Merge Strategy: To avoid out-of-memory (OOM) failures when encoding long videos, the authors introduce a divide-and-merge strategy. The video is split into overlapping chunks that are encoded individually and then merged, which keeps memory usage bounded while preserving temporal consistency across chunk boundaries; a minimal sketch of this chunked encoding follows the list.
  3. Diffusion Transformer Architecture: xGen-VideoSyn-1 employs a video diffusion transformer (VDiT) that extends latent diffusion models to video. The VDiT interleaves spatial and temporal self-attention layers to capture video dynamics and to generalize across resolutions, durations, and aspect ratios, and it combines rotary positional embeddings (RoPE) with sinusoidal encodings to represent spatial and temporal positions; a second sketch after the list illustrates this factorized attention.
  4. Extensive Dataset and Data Processing Pipeline: The paper details an automated and scalable data processing pipeline designed to collect over 13 million high-quality video-text pairs. This pipeline includes deduplication, optical character recognition (OCR), motion detection, aesthetic scoring, and dense captioning. The collected dataset plays a crucial role in training the VidVAE and DiT models, enabling them to achieve competitive performance.
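As a rough illustration of the divide-and-merge idea in item 2, the sketch below encodes a long clip chunk by chunk and blends the overlapping latent frames. The chunk length, overlap, temporal stride, blending rule, and the `encode_chunk` callable (standing in for the VidVAE encoder) are assumptions made for illustration, not the paper's actual configuration.

```python
# Minimal sketch of divide-and-merge encoding for long videos.
# Chunk length, overlap, temporal stride, and the blending rule are assumed
# values for illustration; `encode_chunk` stands in for the VidVAE encoder.
import torch

def encode_long_video(frames: torch.Tensor,      # (T, C, H, W) video frames
                      encode_chunk,              # callable: (t, C, H, W) -> (t', c, h, w) latents
                      chunk_len: int = 16,       # frames per chunk (assumed)
                      overlap: int = 4,          # overlapping frames between chunks (assumed)
                      temporal_stride: int = 4): # assumed VidVAE temporal compression factor
    T = frames.shape[0]
    step = chunk_len - overlap
    latents, weights = None, None

    for start in range(0, max(T - overlap, 1), step):
        z = encode_chunk(frames[start:start + chunk_len])   # latents for this chunk

        # Cross-fade weights over the latent frames shared with the previous chunk.
        w = torch.ones(z.shape[0], device=z.device)
        ov = overlap // temporal_stride
        if start > 0 and ov > 0:
            w[:ov] = torch.linspace(0.0, 1.0, ov + 2, device=z.device)[1:-1]

        z_start = start // temporal_stride
        needed = z_start + z.shape[0]
        if latents is None:
            latents = torch.zeros(needed, *z.shape[1:], device=z.device)
            weights = torch.zeros(needed, device=z.device)
        elif needed > latents.shape[0]:
            pad = needed - latents.shape[0]
            latents = torch.cat([latents, torch.zeros(pad, *z.shape[1:], device=z.device)])
            weights = torch.cat([weights, torch.zeros(pad, device=z.device)])

        latents[z_start:needed] += w[:, None, None, None] * z   # weighted accumulation
        weights[z_start:needed] += w

    return latents / weights.clamp(min=1e-6)[:, None, None, None]  # normalize the blend
```

Only one chunk needs to reside in GPU memory at a time, while the blended overlap keeps latents consistent across chunk boundaries; decoding long videos could apply a mirrored merge in the same spirit.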
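The factorized spatial and temporal self-attention described in item 3 can be sketched as follows. Layer ordering, dimensions, and the use of PyTorch's nn.MultiheadAttention are assumptions; rotary and sinusoidal positional encodings, text cross-attention, and timestep conditioning are omitted to keep the example short.

```python
# Hedged sketch of a factorized spatial/temporal self-attention block.
# Sizes, ordering, and the omitted conditioning are assumptions for illustration.
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.norm_s = nn.LayerNorm(dim)
        self.attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)  # spatial attention
        self.norm_t = nn.LayerNorm(dim)
        self.attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)  # temporal attention
        self.norm_m = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, D) = batch, latent frames, spatial tokens per frame, channels
        B, T, N, D = x.shape

        # Spatial self-attention: tokens within each frame attend to each other.
        xs = x.reshape(B * T, N, D)
        h = self.norm_s(xs)
        xs = xs + self.attn_s(h, h, h, need_weights=False)[0]

        # Temporal self-attention: each spatial location attends across frames.
        xt = xs.reshape(B, T, N, D).permute(0, 2, 1, 3).reshape(B * N, T, D)
        h = self.norm_t(xt)
        xt = xt + self.attn_t(h, h, h, need_weights=False)[0]

        # Feed-forward, then restore the (B, T, N, D) layout.
        out = xt + self.mlp(self.norm_m(xt))
        return out.reshape(B, N, T, D).permute(0, 2, 1, 3)
```

Splitting attention this way keeps each attention call at sequence length N or T rather than N*T, which is what makes long, high-resolution latent sequences tractable.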

Quantitative and Qualitative Results

Training the VidVAE and DiT models required significant computational resources: approximately 40 and 642 H100 GPU days, respectively. The xGen-VideoSyn-1 model supports end-to-end generation of 720p videos longer than 14 seconds. Its performance was evaluated on multiple metrics, with VBench providing a comprehensive analysis.

The model showcases strong performance across various dimensions:

  • Consistency: The generated videos maintain high subject and background consistency over time.
  • Temporal Performance: The model achieves smooth motion and minimal temporal flickering, important for realistic video generation.
  • Aesthetic Quality: xGen-VideoSyn-1 produces videos with high aesthetic value and image quality, outperforming several baselines.

A user study further validates the model's performance, indicating a preference margin of more than 15% over competing methods such as OpenSora V1.1.

Theoretical and Practical Implications

The research introduces a scalable approach to T2V generation, highlighting the potential of compressed representations in reducing computational demands without compromising video quality. The divide-and-merge strategy and VidVAE's temporal compression represent significant advancements in handling long video sequences efficiently.
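To make the computational argument concrete, here is a back-of-the-envelope token count for a 14-second 720p clip. The 8x spatial and 4x temporal compression factors, 24 fps frame rate, and 2x2 patch size are assumed values chosen only to illustrate the scaling; the summary does not state the model's exact ratios.

```python
# Back-of-the-envelope latent token count for a 14 s, 720p, 24 fps clip.
# Compression factors and patch size below are assumptions for illustration.
frames, height, width = 14 * 24, 720, 1280
spatial_ds, temporal_ds, patch = 8, 4, 2

latent_frames = frames // temporal_ds                                                 # 84
tokens_per_frame = (height // spatial_ds // patch) * (width // spatial_ds // patch)   # 45 * 80 = 3600
total_tokens = latent_frames * tokens_per_frame                                       # 302,400

frame_wise_tokens = frames * tokens_per_frame   # image VAE baseline: no temporal compression
print(total_tokens, frame_wise_tokens, frame_wise_tokens / total_tokens)              # 302400 1209600 4.0
```

Under these assumptions, temporal compression alone shortens the token sequence by 4x; for a model attending over the full sequence, that corresponds to roughly a 16x reduction in attention cost.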

From a practical perspective, this model offers robust capabilities for various applications, including content creation, animation, and virtual reality. Its ability to generate diverse styles and high-quality content based on textual descriptions opens new avenues for automated media production.

Future Developments

The paper hints at several future directions:

  • Model Scaling: Enhancing the model's architecture and increasing its parameters could further improve video quality and style fidelity.
  • Expanded Dataset: Incorporating more diverse and extensive datasets could enrich the model's understanding and generation capabilities.
  • Real-time Processing: Optimizing the model for real-time video generation could make it suitable for interactive applications.

In conclusion, "xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations" makes a substantial contribution to the field of T2V generation. By integrating advanced video-specific VAE and diffusion transformer architectures, coupled with a comprehensive data collection pipeline, the model achieves a compelling balance between computational efficiency and video quality, paving the way for future innovations in AI-driven video synthesis.

Authors (19)
  1. Can Qin
  2. Congying Xia
  3. Krithika Ramakrishnan
  4. Michael Ryoo
  5. Lifu Tu
  6. Yihao Feng
  7. Manli Shu
  8. Honglu Zhou
  9. Anas Awadalla
  10. Jun Wang
  11. Senthil Purushwalkam
  12. Le Xue
  13. Yingbo Zhou
  14. Huan Wang
  15. Silvio Savarese
  16. Juan Carlos Niebles
  17. Zeyuan Chen
  18. Ran Xu
  19. Caiming Xiong