StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text (2403.14773v1)

Published 21 Mar 2024 in cs.CV, cs.AI, cs.CL, cs.LG, cs.MM, and eess.IV

Abstract: Text-to-video diffusion models enable the generation of high-quality videos that follow text instructions, making it easy to create diverse and individual content. However, existing approaches mostly focus on high-quality short video generation (typically 16 or 24 frames), ending up with hard cuts when naively extended to long video synthesis. To overcome these limitations, we introduce StreamingT2V, an autoregressive approach for long video generation of 80, 240, 600, 1200 or more frames with smooth transitions. The key components are: (i) a short-term memory block called conditional attention module (CAM), which conditions the current generation on the features extracted from the previous chunk via an attentional mechanism, leading to consistent chunk transitions, (ii) a long-term memory block called appearance preservation module, which extracts high-level scene and object features from the first video chunk to prevent the model from forgetting the initial scene, and (iii) a randomized blending approach that enables a video enhancer to be applied autoregressively to infinitely long videos without inconsistencies between chunks. Experiments show that StreamingT2V generates videos with a high amount of motion, whereas competing image-to-video methods are prone to video stagnation when applied naively in an autoregressive manner. With StreamingT2V we thus propose a high-quality, seamless text-to-long-video generator that outperforms competitors in consistency and motion. Our code will be available at: https://github.com/Picsart-AI-Research/StreamingT2V

StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text

The paper "StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text" introduces a novel approach to overcoming limitations in the text-to-video (T2V) niche, particularly concerning the length and consistency of generated videos. Leveraging diffusion models, the authors propose a methodology for generating videos that span hundreds to thousands of frames without suffering temporal inconsistencies or quality degradation.

Introduction and Background

Recent advances in diffusion models have significantly improved the generation of images from textual descriptions, and these techniques extend naturally to video generation guided by text prompts. However, the transition from image synthesis to video generation introduces new complexities, primarily due to the temporal dimension. Most existing models, including Video Diffusion Models (VDM) and Text2Video-Zero, generate only brief sequences and require extensive computational resources; when naively extended to longer videos, they frequently suffer from stagnation and inconsistency.

Core Contributions

To address these significant limitations, the authors introduce StreamingT2V, an autoregressive text-to-video approach. This method involves several innovative components:

  1. Conditional Attention Module (CAM): This module ensures smooth content transitions between video chunks by conditioning the current generation on features extracted from the previous chunk. Unlike simple concatenation or other conditioning mechanisms, CAM uses a temporal attention mechanism to maintain consistency while preserving high motion dynamics (a minimal sketch of this style of conditioning follows this list).
  2. Appearance Preservation Module (APM): APM prevents the model from forgetting object details and scene characteristics over long sequences by extracting high-level features from an anchor frame at the start of the video and conditioning subsequent chunks on this information.
  3. Randomized Blending for Video Enhancement: To enhance video quality over long sequences, the authors adapt a high-resolution text-to-video model in an autoregressive manner, using randomized blending to achieve seamless transitions between overlapping video chunks.
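
To make the short-term conditioning concrete, below is a minimal PyTorch sketch of a CAM-style block in which the current chunk's features cross-attend to features taken from the last frames of the previous chunk. The class name, feature dimensions, and zero-initialized output projection are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of CAM-style conditioning: tokens of the chunk being generated
# attend to tokens extracted from the last frames of the previous chunk.
import torch
import torch.nn as nn

class ConditionalAttentionSketch(nn.Module):
    def __init__(self, dim: int = 320, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Zero-initialized projection so the conditioning starts as a no-op
        # and is learned during training (an assumption of this sketch).
        self.out_proj = nn.Linear(dim, dim)
        nn.init.zeros_(self.out_proj.weight)
        nn.init.zeros_(self.out_proj.bias)

    def forward(self, x: torch.Tensor, prev_feats: torch.Tensor) -> torch.Tensor:
        # x:          (batch, tokens_current, dim)  features of the chunk being denoised
        # prev_feats: (batch, tokens_prev, dim)     features from the previous chunk's last frames
        attn_out, _ = self.attn(self.norm(x), prev_feats, prev_feats)
        return x + self.out_proj(attn_out)  # residual injection of short-term memory

# Usage with dummy tensors standing in for encoder features.
cam = ConditionalAttentionSketch(dim=320)
x = torch.randn(1, 16 * 64, 320)     # tokens of the current chunk
prev = torch.randn(1, 8 * 64, 320)   # tokens from the previous chunk's last frames
y = cam(x, prev)                     # same shape as x
```

In the real system the conditioning features would come from a learned encoder over the previous chunk's frames; here random tensors stand in for them.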

Methodology

The proposed StreamingT2V method comprises three stages:

  1. Initialization Stage: The first chunk of video frames is synthesized using a pre-trained text-to-video model (e.g., Modelscope).
  2. Streaming T2V Stage: New frames are generated autoregressively using CAM, which conditions each new chunk on the last few frames of the previous chunk, preserving temporal consistency and motion (the sketch after this list shows the loop end to end).
  3. Streaming Refinement Stage: Finally, the entire long video is enhanced using a high-resolution text-to-video model, employing the randomized blending technique to ensure smooth transitions and high visual quality.
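
The three stages can be read as a single autoregressive loop. The sketch below assumes hypothetical callables base_t2v, cam_t2v, and enhance_video standing in for the pre-trained base model, the CAM/APM-conditioned model, and the high-resolution enhancer; chunk length and context size are illustrative.

```python
# Hedged sketch of the three-stage pipeline as one autoregressive loop.
# base_t2v, cam_t2v, and enhance_video are hypothetical callables, injected
# as arguments so the sketch stays self-contained.
from typing import Any, Callable, List

Frame = Any  # placeholder type for a decoded frame (e.g., an image tensor)

def generate_long_video(prompt: str,
                        base_t2v: Callable[..., List[Frame]],
                        cam_t2v: Callable[..., List[Frame]],
                        enhance_video: Callable[..., List[Frame]],
                        num_chunks: int,
                        chunk_len: int = 16,
                        context_len: int = 8) -> List[Frame]:
    # Stage 1: initialization -- synthesize the first chunk with the pre-trained model.
    frames = base_t2v(prompt, num_frames=chunk_len)
    anchor = frames[0]  # APM anchors long-term appearance on the initial scene.

    # Stage 2: streaming -- each new chunk is conditioned on the last few frames
    # generated so far (short-term memory, CAM) and on the anchor frame
    # (long-term memory, APM).
    for _ in range(num_chunks - 1):
        context = frames[-context_len:]
        frames += cam_t2v(prompt, context_frames=context,
                          anchor_frame=anchor, num_frames=chunk_len)

    # Stage 3: refinement -- enhance the full video autoregressively; overlapping
    # enhanced chunks are joined with randomized blending (see the next sketch).
    return enhance_video(frames, prompt)
```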

This approach ensures that the generated videos do not suffer from the typical stagnation or abrupt scene changes seen in previous methodologies.
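
For the refinement stage, one plausible reading of randomized blending is that, within the overlap shared by two consecutive enhanced chunks, a random split index decides which frames are taken from the first chunk and which from the second, so no fixed seam appears at the same position every time. The sketch below follows that reading; the tensor layout and single-shot application are assumptions rather than the paper's exact procedure.

```python
# Sketch of randomized blending over the overlap of two consecutive chunks.
# Assumes the last `overlap` frames of chunk_a depict the same moments in time
# as the first `overlap` frames of chunk_b.
from typing import Optional
import torch

def randomized_blend(chunk_a: torch.Tensor,
                     chunk_b: torch.Tensor,
                     overlap: int,
                     generator: Optional[torch.Generator] = None) -> torch.Tensor:
    # chunk_a, chunk_b: (frames, channels, height, width) latents or frames.
    # Draw a random split inside the overlap: frames before it come from
    # chunk_a, frames after it from chunk_b, so the seam position varies.
    split = int(torch.randint(0, overlap + 1, (1,), generator=generator).item())
    blended = torch.cat([chunk_a[-overlap:][:split], chunk_b[:overlap][split:]], dim=0)
    return torch.cat([chunk_a[:-overlap], blended, chunk_b[overlap:]], dim=0)

# Example with dummy latents: two 24-frame chunks overlapping by 8 frames.
a = torch.randn(24, 4, 40, 72)
b = torch.randn(24, 4, 40, 72)
video = randomized_blend(a, b, overlap=8)   # 24 + 24 - 8 = 40 frames
```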

Experimental Results

Experiments demonstrate that StreamingT2V effectively generates videos with large frame counts (1200 frames and more) while maintaining high image quality and temporal consistency. Quantitative evaluation shows superior performance across several measures, including motion-aware warp error (MAWE), scene cuts (SCuts), CLIP text-image similarity, and aesthetic score, compared with state-of-the-art models such as SparseCtrl, DynamiCrafter-XL, I2VGen-XL, SEINE, SVD, and FreeNoise.
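
Of these metrics, CLIP text-image similarity is straightforward to reproduce in spirit: embed the prompt and each frame with CLIP and average the cosine similarities. The sketch below uses the Hugging Face transformers CLIP implementation; the checkpoint name and plain averaging over frames are assumptions, and MAWE and SCuts follow paper-specific definitions that are not reproduced here.

```python
# Hedged sketch of a per-frame CLIP text-image similarity score for a video.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def clip_text_frame_similarity(prompt: str, frames: list[Image.Image]) -> float:
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = processor(text=[prompt], images=frames, return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])
        image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    # Cosine similarity of every frame against the single prompt embedding, averaged.
    return (image_emb @ text_emb.T).mean().item()
```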

Ablation studies further validate the importance of each component, particularly highlighting the effectiveness of CAM in preventing temporal inconsistencies and APM in maintaining object and scene fidelity over long sequences. The randomized blending approach proves to be crucial for achieving seamless transitions in the enhanced video stages.

Implications and Future Directions

The proposed StreamingT2V method has significant implications for practical applications of video generation, such as advertising, storytelling, and content creation. Its ability to extend the length of generated videos while preserving quality and consistency addresses a critical gap in current methodologies. The modular nature of the approach ensures that future improvements in base models can be seamlessly integrated, suggesting a trajectory of ever-improving video generation capabilities.

Theoretically, this work pushes the boundaries of autoregressive video generation, especially in terms of effectively managing long-range dependencies and temporal consistency. Future developments could explore more sophisticated attention mechanisms, additional conditioning cues, and refined architectural adjustments to further enhance video fidelity and length.

Conclusion

The "StreamingT2V" paper presents a robust solution to the longstanding challenges of generating extended, temporally consistent videos from textual descriptions. Through the innovative use of CAM, APM, and randomized blending, the authors demonstrate a significant leap in the capabilities of text-to-video models, opening new avenues for practical and theoretical advancements in the field.

Authors (8)
  1. Roberto Henschel (8 papers)
  2. Levon Khachatryan (2 papers)
  3. Daniil Hayrapetyan (1 paper)
  4. Hayk Poghosyan (3 papers)
  5. Vahram Tadevosyan (3 papers)
  6. Zhangyang Wang (374 papers)
  7. Shant Navasardyan (10 papers)
  8. Humphrey Shi (97 papers)
Citations (44)