MovieLLM: Enhancing Long Video Understanding with AI-Generated Movies (2403.01422v2)
Abstract: The development of multimodal models has marked a significant step forward in how machines understand videos. These models have shown promise in analyzing short video clips, but they often fall short on longer formats such as movies. The main hurdles are the scarcity of high-quality, diverse long-video data and the intensive work required to collect or annotate it. To address these challenges, we propose MovieLLM, a novel framework for synthesizing consistent, high-quality video data for instruction tuning. The pipeline controls the visual style of generated movies by combining an improved textual inversion technique with the powerful text generation capability of GPT-4. As the first framework of its kind, our approach stands out for its flexibility and scalability, enabling users to create customized movies from a single description and offering a superior alternative to traditional data collection. Extensive experiments validate that data produced by MovieLLM significantly improves the performance of multimodal models in understanding complex video narratives, overcoming the scarcity and bias of existing datasets.
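To make the described pipeline concrete, here is a minimal sketch (not the authors' released code) of the two-stage idea the abstract outlines: GPT-4 expands a single movie description into scene-level frame prompts, and a diffusion model equipped with a textual-inversion style embedding renders style-consistent keyframes from them. The model names, prompt wording, and the embedding path (`path/to/movie_style_embedding`, `<movie-style>`) are illustrative assumptions, not values from the paper.

```python
# Sketch: one movie description -> GPT-4 scene prompts -> style-consistent keyframes.
import torch
from openai import OpenAI
from diffusers import StableDiffusionPipeline

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def expand_plot(movie_description: str, num_scenes: int = 8) -> list[str]:
    """Ask GPT-4 to turn one movie description into scene-level visual prompts."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Expand this movie premise into {num_scenes} short visual "
                f"scene descriptions, one per line:\n{movie_description}"
            ),
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

# Diffusion backbone plus a pre-learned style embedding (textual inversion).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("path/to/movie_style_embedding", token="<movie-style>")  # hypothetical embedding

scenes = expand_plot("A retired detective returns for one last case in a rain-soaked city.")
for i, scene in enumerate(scenes):
    # Prepending the learned style token keeps every keyframe in the same visual style.
    frame = pipe(f"<movie-style> {scene}").images[0]
    frame.save(f"keyframe_{i:02d}.png")
```

In this sketch, frame-level consistency comes entirely from the shared style token; the paper's contribution is the full pipeline that also generates plots, dialogue, and question-answer pairs for instruction tuning, which is omitted here.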
Authors: Zhende Song, Chenchen Wang, Jiamu Sheng, Chi Zhang, Gang Yu, Jiayuan Fan, Tao Chen