DreamFrame: Enhancing Video Understanding via Automatically Generated QA and Style-Consistent Keyframes
Abstract: Recent large vision-language models (LVLMs) for video understanding are primarily fine-tuned on videos scraped from online platforms. Existing datasets, such as ActivityNet, require considerable human labor for structuring and annotation before they can be effectively used to tune LVLMs. Moreover, while current LVLMs are primarily trained on such datasets in broad, general-purpose settings, adapting them to specific downstream scenarios remains challenging, as collecting and annotating task-specific videos is highly labor-intensive and time-consuming. To address this issue, we propose a three-stage framework named DreamFrame that automatically generates style-consistent keyframes and corresponding question-answer (QA) pairs to support LVLM instruction tuning. DreamFrame generates datasets in a movie-like manner. First, we use an LLM to generate structured movie plots, including movie prior information (such as overview and style), frame descriptions, and plot-related QA pairs, employing a story expansion strategy to mitigate context-length limitations. Then, to ensure visual consistency across generated frames, we design a Style Immobilization Process that maintains a consistent style through an embedding learning strategy. Finally, frame descriptions and style embeddings are integrated to produce coherent keyframes. Using DreamFrame, we construct a dataset comprising approximately 1k stylized keyframe-like videos and 100k diverse QA pairs. Extensive fine-tuning experiments on various LVLM architectures demonstrate the effectiveness of the proposed dataset. Furthermore, based on this dataset, we fine-tune a new LVLM named DreamFrame-7B, which significantly surpasses previous similar-sized LVLMs across different benchmarks.
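The abstract describes a three-stage pipeline: (1) an LLM produces a structured plot, frame descriptions, and QA pairs; (2) a style embedding is learned so all frames share one look; (3) a text-to-image model renders the keyframes. The sketch below is a minimal illustration of how such a pipeline could be wired together, assuming an off-the-shelf diffusion model from the `diffusers` library and a style embedding learned beforehand with a textual-inversion-style procedure. The helper `call_llm`, the token `<movie-style>`, the file `movie_style.bin`, and the prompt templates are hypothetical placeholders, not the authors' released code.

```python
import json
import torch
from diffusers import StableDiffusionPipeline


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (e.g., GPT-4); plug in your own client."""
    raise NotImplementedError("connect this to your preferred LLM backend")


# --- Stage 1: structured plot, frame descriptions, and QA pairs ----------------
def generate_movie_annotations(genre: str, n_frames: int = 8) -> dict:
    """Ask the LLM for a movie-style plot, per-keyframe descriptions, and QA pairs."""
    plan = call_llm(
        f"Write a short {genre} movie plot as JSON with keys 'overview', 'style', "
        f"and 'frames' ({n_frames} one-sentence scene descriptions)."
    )
    movie = json.loads(plan)
    # Story expansion: elaborate each scene separately to stay within the context window.
    movie["frames"] = [
        call_llm(f"Expand this scene into a detailed visual description: {scene}")
        for scene in movie["frames"]
    ]
    movie["qa_pairs"] = json.loads(
        call_llm(f"Generate a JSON list of question-answer pairs about this plot: {movie['overview']}")
    )
    return movie


# --- Stages 2 and 3: style-consistent keyframe rendering -----------------------
def render_keyframes(movie: dict, style_embedding_path: str = "movie_style.bin"):
    """Render keyframes with a frozen diffusion model plus a learned style embedding.

    The embedding is assumed to have been trained in advance (the paper's Style
    Immobilization Process resembles textual inversion), so every frame reuses
    the same style token and stays visually consistent.
    """
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_textual_inversion(style_embedding_path, token="<movie-style>")

    frames = []
    for description in movie["frames"]:
        image = pipe(f"{description}, in the style of <movie-style>").images[0]
        frames.append(image)
    return frames
```

In a full pipeline, the rendered keyframes would then be paired with the generated QA pairs to form instruction-tuning samples for an LVLM.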
- Glance and focus: Memory prompting for multi-event video question answering. arXiv preprint arXiv:2401.01529.
- ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 961–970.
- FIBER: Fill-in-the-blanks as a challenging video understanding evaluation framework. In Association for Computational Linguistics, pages 2925–2940.
- David Chen and William B Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In Association for Computational Linguistics, pages 190–200.
- SlowFast networks for video recognition.
- An image is worth one word: Personalizing text-to-image generation using textual inversion.
- Env-QA: A video question answering benchmark for comprehensive understanding of dynamic environments. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1675–1685.
- ChartLlama: A multimodal LLM for chart understanding and generation. arXiv preprint arXiv:2311.16483.
- MovieNet: A holistic dataset for movie understanding. In European Conference on Computer Vision, pages 709–727. Springer.
- Video-LaVIT: Unified video-language pre-training with decoupled visual-motional tokenization. arXiv preprint arXiv:2402.03161.
- Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931–1941.
- BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597.
- VideoChat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355.
- LLaMA-VID: An image is worth 2 tokens in large language models. arXiv preprint arXiv:2311.17043.
- Video-LLaVA: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122.
- Visual instruction tuning. arXiv preprint arXiv:2304.08485.
- Cones 2: Customizable image synthesis with multiple subjects. arXiv preprint arXiv:2305.19327.
- Vista-LLaMA: Reliable video narrator via equal distance to visual tokens. arXiv preprint arXiv:2312.08870.
- Video-ChatGPT: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424.
- No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12):4695–4708.
- OpenAI. 2023. GPT-4 technical report.
- Video understanding with large language models: A survey. arXiv preprint arXiv:2312.17432.
- LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
- MSR-VTT: A large video description dataset for bridging video and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5288–5296.
- AppAgent: Multimodal agents as smartphone users. arXiv preprint arXiv:2312.13771.
- Video-LLaMA: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858.
- Video question answering: Datasets, algorithms and challenges. In Empirical Methods in Natural Language Processing, pages 6439–6455.
- MiniGPT-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592.