
DirecT2V: Large Language Models are Frame-Level Directors for Zero-Shot Text-to-Video Generation (2305.14330v3)

Published 23 May 2023 in cs.CV, cs.AI, and cs.CL

Abstract: In the paradigm of AI-generated content (AIGC), there has been increasing attention to transferring knowledge from pre-trained text-to-image (T2I) models to text-to-video (T2V) generation. Despite their effectiveness, these frameworks face challenges in maintaining consistent narratives and handling shifts in scene composition or object placement from a single abstract user prompt. Exploring the ability of LLMs to generate time-dependent, frame-by-frame prompts, this paper introduces a new framework, dubbed DirecT2V. DirecT2V leverages instruction-tuned LLMs as directors, enabling the inclusion of time-varying content and facilitating consistent video generation. To maintain temporal consistency and prevent mapping the value to a different object, we equip a diffusion model with a novel value mapping method and dual-softmax filtering, which do not require any additional training. The experimental results validate the effectiveness of our framework in producing visually coherent and storyful videos from abstract user prompts, successfully addressing the challenges of zero-shot video generation.
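The abstract mentions dual-softmax filtering as a training-free way to keep attention values mapped to the same object across frames. The paper's own implementation is not reproduced here; the following is a minimal sketch of the general dual-softmax idea from correspondence matching, applied to cross-frame attention scores. The function name, the temperature `tau`, and the use of plain NumPy are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_softmax_weights(q, k, tau=0.1):
    """Illustrative dual-softmax filter (hypothetical helper).

    q: (n, d) query features from the current frame
    k: (m, d) key features from an anchor frame
    Returns (n, m) weights where only mutually confident
    matches keep a high score.
    """
    sim = q @ k.T  # (n, m) similarity matrix
    # softmax over both matching directions; the elementwise
    # product suppresses one-sided (unreliable) correspondences
    return softmax(sim / tau, axis=1) * softmax(sim / tau, axis=0)
```

Because each weight is a product of two probabilities, entries stay in [0, 1] and a match survives only if it is confident in both directions, which is the filtering property the abstract alludes to.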

Authors (5)
  1. Susung Hong (12 papers)
  2. Junyoung Seo (14 papers)
  3. Sunghwan Hong (16 papers)
  4. Heeseong Shin (6 papers)
  5. Seungryong Kim (103 papers)
Citations (26)