
A Recipe for Scaling up Text-to-Video Generation with Text-free Videos (2312.15770v1)

Published 25 Dec 2023 in cs.CV and cs.AI

Abstract: Diffusion-based text-to-video generation has witnessed impressive progress in the past year yet still falls behind text-to-image generation. One of the key reasons is the limited scale of publicly available data (e.g., 10M video-text pairs in WebVid10M vs. 5B image-text pairs in LAION), considering the high cost of video captioning. Instead, it could be far easier to collect unlabeled clips from video platforms like YouTube. Motivated by this, we come up with a novel text-to-video generation framework, termed TF-T2V, which can directly learn with text-free videos. The rationale behind is to separate the process of text decoding from that of temporal modeling. To this end, we employ a content branch and a motion branch, which are jointly optimized with weights shared. Following such a pipeline, we study the effect of doubling the scale of training set (i.e., video-only WebVid10M) with some randomly collected text-free videos and are encouraged to observe the performance improvement (FID from 9.67 to 8.19 and FVD from 484 to 441), demonstrating the scalability of our approach. We also find that our model could enjoy sustainable performance gain (FID from 8.19 to 7.64 and FVD from 441 to 366) after reintroducing some text labels for training. Finally, we validate the effectiveness and generalizability of our ideology on both native text-to-video generation and compositional video synthesis paradigms. Code and models will be publicly available at https://tf-t2v.github.io/.

Understanding Text-to-Video Generation

Introduction

Creating videos from textual descriptions is a significant challenge in artificial intelligence, particularly because videos couple visual content with temporal dynamics. Generative models have made significant strides in this domain, yet text-to-video generation still lags behind text-to-image generation. A crucial limiting factor is the scarcity of large-scale text-annotated video datasets, since video captioning is resource-intensive: WebVid10M offers roughly 10M video-text pairs, compared with the 5B image-text pairs available in LAION-5B.

A Novel Approach

Researchers have proposed a framework known as TF-T2V (Text-Free Text-to-Video), which leverages the abundance of unlabeled videos readily available from sources like YouTube, sidestepping the need to caption those videos. By decoupling text decoding from temporal modeling, the model is trained with two branches that share weights and are jointly optimized: a content branch and a motion branch. The content branch uses image-text data to learn spatial appearance generation, while the motion branch learns video synthesis from text-free videos, capturing intricate motion patterns. A rough training sketch follows below.
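To make the decoupling concrete, here is a minimal, hypothetical sketch of how a single shared backbone might be trained on alternating image-text and text-free video batches. All names (SharedUNet, diffusion_loss, train_step) are illustrative assumptions rather than the authors' implementation, and text conditioning, the noise schedule, and the latent encoder are heavily simplified.

```python
# Hypothetical sketch of joint content/motion training on shared weights.
# Not the authors' code: modules and losses are simplified stand-ins.
import torch

class SharedUNet(torch.nn.Module):
    """One backbone: spatial blocks act as the content branch, temporal
    blocks as the motion branch; all weights are shared across batches."""
    def __init__(self, dim=64):
        super().__init__()
        self.spatial = torch.nn.Conv2d(4, dim, 3, padding=1)                      # appearance
        self.temporal = torch.nn.Conv3d(dim, dim, (3, 1, 1), padding=(1, 0, 0))   # motion
        self.out = torch.nn.Conv2d(dim, 4, 3, padding=1)

    def forward(self, latents, text_emb=None):
        # latents: (B, T, 4, H, W); T == 1 for image-text batches.
        # Text conditioning (e.g., cross-attention) is omitted for brevity.
        b, t, c, h, w = latents.shape
        x = self.spatial(latents.flatten(0, 1))                 # per-frame features
        x = x.view(b, t, -1, h, w).permute(0, 2, 1, 3, 4)       # (B, C, T, H, W)
        x = self.temporal(x).permute(0, 2, 1, 3, 4).flatten(0, 1)
        return self.out(x).view(b, t, c, h, w)

def diffusion_loss(model, latents, text_emb=None):
    # Stand-in for a denoising objective: predict the added noise.
    noise = torch.randn_like(latents)
    pred = model(latents + noise, text_emb)
    return torch.nn.functional.mse_loss(pred, noise)

model = SharedUNet()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(image_text_batch, text_free_video_batch):
    img_latents, text_emb = image_text_batch              # (B, 1, 4, H, W), text features
    vid_latents = text_free_video_batch                   # (B, T, 4, H, W), no captions
    loss = diffusion_loss(model, img_latents, text_emb)   # content branch: text-guided spatial
    loss = loss + diffusion_loss(model, vid_latents)      # motion branch: text-free temporal
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy call with random latents (B=2, H=W=16): one image-text batch, one 8-frame clip.
demo_img = (torch.randn(2, 1, 4, 16, 16), torch.randn(2, 77, 768))
demo_vid = torch.randn(2, 8, 4, 16, 16)
print(train_step(demo_img, demo_vid))
```

The key design choice this sketch tries to convey is that neither branch owns separate parameters: image-text batches and text-free video batches simply exercise different parts of the same network, so appearance learned with text supervision transfers to the motion modeling learned without it.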

Scalability and Performance

The paper shows that expanding the training set with text-free videos yields measurable gains: doubling the video-only WebVid10M data with randomly collected text-free clips improves FID (Frechet Inception Distance) from 9.67 to 8.19 and FVD (Frechet Video Distance) from 484 to 441, metrics that gauge visual quality and temporal coherence. Reintroducing some text labels pushes performance further (FID 8.19 to 7.64, FVD 441 to 366), suggesting the model scales sustainably with more data. The framework's versatility is demonstrated on both native text-to-video generation and compositional video synthesis, the latter supporting additional controls such as depth, sketch, and motion vectors. A brief reminder of what these metrics compute is sketched below.
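Both FID and FVD reduce to the Fréchet distance between Gaussian fits of real and generated feature distributions (Inception features for FID, features from a pretrained video network for FVD). The snippet below is a minimal sketch of that computation, not the exact evaluation pipeline used in the paper.

```python
# Minimal Fréchet-distance sketch shared by FID and FVD:
# fit a Gaussian to each feature set and compare the two Gaussians.
import numpy as np
from scipy import linalg

def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """real_feats, fake_feats: (N, D) feature matrices from a pretrained encoder."""
    mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)  # matrix square root
    covmean = covmean.real                                 # discard tiny imaginary parts
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Lower values mean the generated feature distribution sits closer to the real one, which is why the reported drops in FID and FVD indicate improvement.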

Implementation Insights

The paper details the structure of the TF-T2V model, built on publicly available baselines and extended to high-definition video generation. Quantitative metrics, user studies, and ablation tests confirm the effectiveness of the proposed methods; the temporal coherence loss in particular promotes smoothly transitioning videos.
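This summary does not reproduce the exact form of the paper's temporal coherence loss, so the following is only an illustrative sketch of one common formulation: penalizing mismatches between the consecutive-frame differences of the prediction and of the target, which discourages abrupt frame-to-frame changes.

```python
# Illustrative temporal-consistency penalty (an assumption, not the paper's loss):
# match the frame-to-frame differences of the prediction to those of the target.
import torch

def temporal_coherence_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (B, T, C, H, W) video tensors, e.g., predicted vs. clean latents."""
    pred_diff = pred[:, 1:] - pred[:, :-1]        # motion between consecutive frames
    target_diff = target[:, 1:] - target[:, :-1]
    return torch.nn.functional.mse_loss(pred_diff, target_diff)
```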

Limitations and Future Directions

As with any research, there are avenues for further exploration. One limitation cited is the unexplored potential of scaling with text-free video datasets significantly larger than the ones used. Another is the potential for processing longer-form videos, which remains a challenge within the current scope of this paper. Additionally, more refinement is needed for the model to precisely interpret and render videos that require understanding complex action descriptions embedded in text prompts.

Conclusion

This development in text-to-video generation illustrates a significant step forward in the field's pursuit to create realistic and temporally coherent videos from text. The research indicates that scalable and versatile video generation is feasible without relying on extensive text annotations, opening up new possibilities for content creation using advanced AI techniques. With the code and models slated for public release, the work promises to contribute significantly to future advances in video generation technology.

Authors (9)
  1. Xiang Wang
  2. Shiwei Zhang
  3. Hangjie Yuan
  4. Zhiwu Qing
  5. Biao Gong
  6. Yingya Zhang
  7. Yujun Shen
  8. Changxin Gao
  9. Nong Sang