
Too Many Frames, Not All Useful: Efficient Strategies for Long-Form Video QA (2406.09396v3)

Published 13 Jun 2024 in cs.CV

Abstract: Long-form videos that span wide temporal intervals are highly redundant and contain multiple distinct events or entities that are often only loosely related. Therefore, when performing long-form video question answering (LVQA), all information necessary to generate a correct response can often be contained within a small subset of frames. Recent literature explores the use of LLMs on LVQA benchmarks, achieving exceptional performance while relying on vision-language models (VLMs) to convert all visual content within videos into natural language. Such VLMs often independently caption a large number of frames uniformly sampled from long videos, which is inefficient and mostly redundant. Questioning these design choices, we explore optimal strategies for keyframe selection that can significantly reduce these redundancies, namely a Hierarchical Keyframe Selector. Our proposed framework, LVNet, achieves state-of-the-art performance at a comparable caption scale across three benchmark LVQA datasets: EgoSchema, IntentQA, and NExT-QA. The code can be found at https://github.com/jongwoopark7978/LVNet
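The idea of hierarchical keyframe selection can be illustrated with a minimal sketch. This is not the paper's actual LVNet implementation; it assumes frames are already encoded as feature vectors and uses a toy dot-product relevance score, but it shows the core strategy: segment the video coarsely, keep only the frames most relevant to the question in each segment, and caption just those.

```python
from typing import List, Sequence


def score(frame_feat: Sequence[float], query_feat: Sequence[float]) -> float:
    """Toy relevance score: dot product of frame and query features."""
    return sum(f * q for f, q in zip(frame_feat, query_feat))


def select_keyframes(frames: List[Sequence[float]],
                     query: Sequence[float],
                     num_segments: int = 4,
                     per_segment: int = 1) -> List[int]:
    """Return indices of a small, query-relevant subset of frames.

    Hierarchical in the simplest sense: first partition the video into
    coarse temporal segments, then rank frames within each segment and
    keep only the top `per_segment` per segment.
    """
    seg_len = max(1, len(frames) // num_segments)
    selected: List[int] = []
    for start in range(0, len(frames), seg_len):
        segment = list(range(start, min(start + seg_len, len(frames))))
        # Within each segment, keep the frames most relevant to the query.
        segment.sort(key=lambda i: score(frames[i], query), reverse=True)
        selected.extend(segment[:per_segment])
    return sorted(selected)


# Example: 8 one-dimensional "frames"; the query favors large feature values.
frames = [[0.1], [0.9], [0.2], [0.3], [0.8], [0.1], [0.4], [0.7]]
keep = select_keyframes(frames, query=[1.0], num_segments=4)
print(keep)  # one top-scoring frame per 2-frame segment: [1, 3, 4, 7]
```

Only the selected indices would then be passed to a captioning VLM, which is where the efficiency gain over uniformly captioning every sampled frame comes from.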

Authors (6)
  1. Jongwoo Park (8 papers)
  2. Kanchana Ranasinghe (21 papers)
  3. Kumara Kahatapitiya (20 papers)
  4. Wonjeong Ryoo (3 papers)
  5. Donghyun Kim (129 papers)
  6. Michael S. Ryoo (75 papers)
Citations (6)