
Fewer Tokens and Fewer Videos: Extending Video Understanding Abilities in Large Vision-Language Models (2406.08024v1)

Published 12 Jun 2024 in cs.CV and cs.AI

Abstract: Amidst the advancements in image-based Large Vision-Language Models (image-LVLMs), the transition to video-based models (video-LVLMs) is hindered by the limited availability of quality video data. This paper addresses the challenge by leveraging the visual commonalities between images and videos to efficiently evolve image-LVLMs into video-LVLMs. We present a cost-effective video-LVLM that enhances model architecture, introduces innovative training strategies, and identifies the most effective types of video instruction data. Our innovative weighted token sampler significantly compresses the visual token numbers of each video frame, effectively cutting computational expenses. We also find that judiciously using just 10% of the video data, compared to prior video-LVLMs, yields impressive results during various training phases. Moreover, we delve into the influence of video instruction data in limited-resource settings, highlighting the significance of incorporating video training data that emphasizes temporal understanding to enhance model performance. The resulting Fewer Tokens and Fewer Videos LVLM (FTFV-LVLM) exhibits exceptional performance across video and image benchmarks, validating our model's design and training approaches.
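
The efficiency mechanism named in the abstract is the weighted token sampler, which scores the visual tokens of each video frame and passes only a small, weighted subset on to the language model. The abstract does not give the exact formulation, so the following is only a minimal PyTorch sketch under stated assumptions: the importance weights come from a hypothetical learned linear scorer, and `WeightedTokenSampler`, `keep_ratio`, and the hard top-k selection are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn


class WeightedTokenSampler(nn.Module):
    """Illustrative sketch of a per-frame weighted token sampler.

    Assumption: token importance is predicted by a learned linear scorer and
    the top-k tokens are kept, re-scaled by their normalized weights. The
    paper's actual sampler may differ.
    """

    def __init__(self, dim: int, keep_ratio: float = 0.25):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # hypothetical importance scorer
        self.keep_ratio = keep_ratio

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, num_tokens, dim) visual tokens of one frame
        b, n, d = frame_tokens.shape
        k = max(1, int(n * self.keep_ratio))

        # Normalized importance weight per token within the frame.
        weights = self.scorer(frame_tokens).squeeze(-1).softmax(dim=-1)  # (b, n)

        # Keep the k highest-weighted tokens; multiplying by the weights lets
        # gradients reach the scorer through the kept tokens.
        topk_w, topk_idx = weights.topk(k, dim=-1)                       # (b, k)
        idx = topk_idx.unsqueeze(-1).expand(-1, -1, d)
        kept = frame_tokens.gather(1, idx) * topk_w.unsqueeze(-1)
        return kept                                                      # (b, k, d)


if __name__ == "__main__":
    sampler = WeightedTokenSampler(dim=768, keep_ratio=0.25)
    tokens = torch.randn(2, 256, 768)   # e.g. 256 ViT patch tokens per frame
    print(sampler(tokens).shape)        # torch.Size([2, 64, 768])
```

With a keep ratio of 0.25, each frame contributes a quarter of its original visual tokens, which is the kind of compression the abstract credits for cutting computational expense.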

Authors (5)
  1. Shimin Chen (15 papers)
  2. Yitian Yuan (16 papers)
  3. Shaoxiang Chen (24 papers)
  4. Zequn Jie (60 papers)
  5. Lin Ma (206 papers)
Citations (2)