Context-Enhanced Video Moment Retrieval with Large Language Models (2405.12540v1)
Abstract: Current methods for Video Moment Retrieval (VMR) struggle to align queries describing complex situations that involve specific environmental details, character descriptions, and action narratives. To tackle this issue, we propose an LLM-guided Moment Retrieval (LMR) approach that leverages the extensive knowledge of large language models (LLMs) to improve video context representation as well as cross-modal alignment, facilitating accurate localization of target moments. Specifically, LMR introduces an LLM-based context enhancement technique that generates crucial, target-related context semantics. These semantics are integrated with visual features to produce discriminative video representations. Finally, a language-conditioned transformer decodes free-form language queries on the fly against the aligned video representations for moment retrieval. Extensive experiments demonstrate that LMR achieves state-of-the-art results, outperforming the nearest competitor by up to 3.28% and 4.06% on the challenging QVHighlights and Charades-STA benchmarks, respectively. More importantly, the performance gains are significantly higher for the localization of complex queries.
- Weijia Liu (9 papers)
- Bo Miao (8 papers)
- Jiuxin Cao (18 papers)
- Xuelin Zhu (8 papers)
- Bo Liu (484 papers)
- Mehwish Nasim (18 papers)
- Ajmal Mian (136 papers)
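
The abstract describes the pipeline only at a high level, so the following is a minimal PyTorch sketch of how the described components could fit together: LLM-generated context semantics fused with clip-level visual features, followed by a language-conditioned transformer decoder. All module names, feature shapes, the concatenation-based fusion, and the span head are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class ContextEnhancedRetriever(nn.Module):
    """Minimal sketch of an LMR-style pipeline (assumed design, not the paper's code)."""

    def __init__(self, dim: int = 256, heads: int = 8, layers: int = 2):
        super().__init__()
        # Fuse clip-level visual features with LLM-generated context semantics
        # (e.g., embeddings of environment/character/action descriptions).
        # Concatenation + linear projection is an assumed fusion strategy.
        self.fuse = nn.Linear(2 * dim, dim)
        # Language-conditioned decoding: free-form query tokens attend over
        # the context-enhanced video representation.
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)
        # Predict a normalized (center, width) moment span per query token;
        # a real system would add matching/aggregation on top of this.
        self.span_head = nn.Linear(dim, 2)

    def forward(self, video_feats, context_feats, query_feats):
        # video_feats:   (B, T, dim) clip-level visual features
        # context_feats: (B, T, dim) LLM context semantics aligned to clips
        # query_feats:   (B, L, dim) token embeddings of the language query
        enhanced = self.fuse(torch.cat([video_feats, context_feats], dim=-1))
        decoded = self.decoder(tgt=query_feats, memory=enhanced)
        # Sigmoid keeps predicted spans in normalized [0, 1] video time.
        return torch.sigmoid(self.span_head(decoded))


# Toy usage with random features.
model = ContextEnhancedRetriever()
spans = model(torch.randn(1, 64, 256), torch.randn(1, 64, 256), torch.randn(1, 12, 256))
print(spans.shape)  # torch.Size([1, 12, 2])
```

The decoder-over-memory layout mirrors the abstract's claim that queries are decoded "on the fly" against aligned video representations; how LMR actually conditions the transformer on language is not specified here, so this cross-attention arrangement is one plausible reading.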