Re-thinking Temporal Search for Long-Form Video Understanding (2504.02259v2)
Abstract: Efficiently understanding long-form videos remains a significant challenge in computer vision. In this work, we revisit temporal search paradigms for long-form video understanding and address a fundamental issue pertaining to all state-of-the-art (SOTA) long-context vision-language models (VLMs). Our contributions are twofold. First, we frame temporal search as a Long Video Haystack problem: finding a minimal set of relevant frames (e.g., one to five) from tens of thousands of frames given a specific query. Building on this formulation, we introduce LV-Haystack, the first dataset of its kind, comprising 480 hours of video and 15,092 human-annotated instances for both training and evaluation, aimed at improving temporal search quality and efficiency. Results on LV-Haystack highlight a significant research gap in temporal search capabilities, with current SOTA search methods achieving only a 2.1% temporal F1 score on the LongVideoBench subset. Second, inspired by visual search in images, we propose T*, a lightweight temporal search framework that reframes costly temporal search as spatial search. T* leverages powerful visual localization techniques commonly used on images and introduces an adaptive zooming-in mechanism that operates across both the temporal and spatial dimensions. Extensive experiments show that integrating T* with existing methods significantly improves SOTA long-form video understanding. Under an inference budget of 32 frames, T* improves GPT-4o's performance from 50.5% to 53.1% and LLaVA-OneVision-OV-72B's performance from 56.5% to 62.4% on the LongVideoBench XL subset. Our code, benchmark, and models are provided in the supplementary material.
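
The temporal F1 score mentioned above compares a small set of predicted keyframes against human-annotated ground-truth frames. The snippet below is a minimal illustrative sketch of such a metric, assuming a tolerance-based matching rule; the function name `temporal_f1`, the `tol` window, and the example timestamps are hypothetical, since the abstract does not specify LV-Haystack's exact matching criterion.

```python
# Hedged sketch: tolerance-based temporal F1 between predicted and ground-truth
# keyframe timestamps (in seconds). The matching rule is an assumption for
# illustration, not the benchmark's official protocol.
from typing import List


def temporal_f1(pred_ts: List[float], gt_ts: List[float], tol: float = 2.0) -> float:
    """F1 between predicted and annotated keyframes, matched within `tol` seconds."""
    if not pred_ts or not gt_ts:
        return 0.0

    # Greedily match each prediction to the nearest unused ground-truth frame.
    unmatched_gt = list(gt_ts)
    matches = 0
    for p in pred_ts:
        best = min(unmatched_gt, key=lambda g: abs(g - p), default=None)
        if best is not None and abs(best - p) <= tol:
            unmatched_gt.remove(best)
            matches += 1

    precision = matches / len(pred_ts)
    recall = matches / len(gt_ts)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Toy example: 3 predicted keyframes vs. 2 annotated keyframes -> F1 = 0.8.
print(temporal_f1(pred_ts=[12.0, 340.5, 971.0], gt_ts=[11.5, 970.0]))
```

Likewise, a rough sketch of the coarse-to-fine idea behind T* (score sparsely sampled frames with an image-level relevance model, then "zoom in" by re-sampling more densely around the most promising temporal regions) might look as follows. The `score_frame` callback, the two-round schedule, and all parameter names are assumptions for illustration and do not reproduce the paper's actual pipeline.

```python
# Hedged sketch of an adaptive temporal zoom-in under a fixed frame budget.
# `score_frame` stands in for any image-level query-relevance scorer
# (e.g., a visual grounding model); this is not the official T* implementation.
from typing import Callable, List


def adaptive_temporal_zoom(
    num_frames: int,
    score_frame: Callable[[int], float],  # frame index -> query relevance score
    budget: int = 32,                     # total frames we are allowed to score
    top_k: int = 4,                       # coarse hits to zoom into
) -> List[int]:
    """Return up to `budget` frame indices, concentrated around relevant regions."""
    # Round 1: uniform coarse sampling over the whole video.
    coarse = max(budget // 2, 1)
    step = max(num_frames // coarse, 1)
    sampled = {i: score_frame(i) for i in range(0, num_frames, step)}

    # Round 2: spend the remaining budget around the best coarse hits.
    remaining = budget - len(sampled)
    if remaining > 0:
        best = sorted(sampled, key=sampled.get, reverse=True)[:top_k]
        per_region = max(remaining // max(len(best), 1), 1)
        for center in best:
            local_step = max(step // (per_region + 1), 1)
            for offset in range(1, per_region + 1):
                idx = min(max(center + offset * local_step - step // 2, 0), num_frames - 1)
                if idx not in sampled and len(sampled) < budget:
                    sampled[idx] = score_frame(idx)

    # Return the scored frames ranked by relevance (at most `budget` of them).
    return sorted(sampled, key=sampled.get, reverse=True)[:budget]


# Toy usage: pretend relevance peaks near frame 7,000 of a 10,000-frame video.
keyframes = adaptive_temporal_zoom(10_000, lambda i: -abs(i - 7_000), budget=32)
```

In the full method, the scorer would presumably be backed by an image localization model and the refinement could recurse further; the sketch stops after one zoom-in round for brevity.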
- Jinhui Ye (8 papers)
- Zihan Wang (181 papers)
- Haosen Sun (3 papers)
- Keshigeyan Chandrasegaran (13 papers)
- Zane Durante (12 papers)
- Cristobal Eyzaguirre (5 papers)
- Yonatan Bisk (91 papers)
- Juan Carlos Niebles (95 papers)
- Ehsan Adeli (97 papers)
- Li Fei-Fei (199 papers)
- Jiajun Wu (249 papers)
- Manling Li (47 papers)