HawkEye: Training Video-Text LLMs for Grounding Text in Videos (2403.10228v1)

Published 15 Mar 2024 in cs.CV, cs.AI, and cs.CL

Abstract: Video-text large language models (video-text LLMs) have shown remarkable performance in answering questions and holding conversations about simple videos. However, they perform almost at chance when grounding text queries in long and complicated videos, showing little ability to understand and reason about temporal information, which is the most fundamental difference between videos and images. In this paper, we propose HawkEye, one of the first video-text LLMs that can perform temporal video grounding in a fully text-to-text manner. To collect training data suitable for temporal video grounding, we construct InternVid-G, a large-scale video-text corpus with segment-level captions and negative spans, with which we introduce two new time-aware training objectives for video-text LLMs. We also propose a coarse-grained method of representing segments in videos, which is more robust and easier for LLMs to learn and follow than alternatives. Extensive experiments show that HawkEye is better at temporal video grounding and comparable to existing video-text LLMs on other video-text tasks, verifying its superior video-text multi-modal understanding abilities.
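The abstract does not spell out the coarse-grained segment representation, so the following is only a minimal sketch of the general idea behind fully text-to-text temporal grounding: rather than emitting exact timestamps, the model answers with one of a few coarse textual spans, which can then be mapped back to an approximate time range. The function names and the specific bin scheme below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a coarse-grained segment representation for
# text-to-text temporal grounding. The abstract does not specify the exact
# scheme HawkEye uses; here we assume a segment is described by which coarse
# portion of the video it occupies (beginning / middle / end / throughout).

def segment_to_text(start: float, end: float, duration: float) -> str:
    """Map a (start, end) span in seconds to a coarse textual answer."""
    if duration <= 0 or not (0 <= start < end <= duration):
        raise ValueError("invalid segment")
    s, e = start / duration, end / duration  # normalize to [0, 1]
    if e - s > 0.75:                         # covers most of the video
        return "throughout the video"
    center = (s + e) / 2
    if center < 1 / 3:
        return "in the beginning of the video"
    if center < 2 / 3:
        return "in the middle of the video"
    return "at the end of the video"


def text_to_segment(answer: str, duration: float) -> tuple[float, float]:
    """Invert the coarse label back to an approximate span in seconds."""
    spans = {
        "in the beginning of the video": (0.0, 1 / 3),
        "in the middle of the video": (1 / 3, 2 / 3),
        "at the end of the video": (2 / 3, 1.0),
        "throughout the video": (0.0, 1.0),
    }
    s, e = spans[answer]
    return s * duration, e * duration


if __name__ == "__main__":
    # A query grounded to seconds 40-55 of a 60-second video:
    label = segment_to_text(40.0, 55.0, 60.0)
    print(label)                          # -> "at the end of the video"
    print(text_to_segment(label, 60.0))   # -> (40.0, 60.0)
```

Because both directions of the mapping are plain text, a coarse scheme like this lets grounding be trained and evaluated with ordinary sequence-to-sequence objectives, at the cost of temporal precision.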

Authors (6)
  1. Yueqian Wang (11 papers)
  2. Xiaojun Meng (23 papers)
  3. Jianxin Liang (7 papers)
  4. Yuxuan Wang (239 papers)
  5. Qun Liu (230 papers)
  6. Dongyan Zhao (144 papers)
Citations (15)