DORi: Discovering Object Relationship for Moment Localization of a Natural-Language Query in Video (2010.06260v1)
Abstract: This paper studies the task of temporal moment localization in a long untrimmed video using a natural-language query. Given a query sentence, the goal is to determine the start and end of the relevant segment within the video. Our key innovation is to learn a video feature embedding through a language-conditioned message-passing algorithm suited to temporal moment localization, which captures the relationships between humans, objects, and activities in the video. These relationships are obtained via a spatial sub-graph that contextualizes the scene representation using detected objects and human features conditioned on the language query. In addition, a temporal sub-graph captures the activities within the video over time. Our method is evaluated on three standard benchmark datasets, and we also introduce YouCookII as a new benchmark for this task. Experiments show that our method outperforms state-of-the-art methods on these datasets, confirming the effectiveness of our approach.
- Cristian Rodriguez-Opazo (15 papers)
- Edison Marrese-Taylor (29 papers)
- Basura Fernando (60 papers)
- Hongdong Li (172 papers)
- Stephen Gould (104 papers)
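The abstract describes a language-conditioned message-passing step over a spatial sub-graph of detected objects and humans. Below is a minimal sketch of what one such step could look like; the class name `LanguageConditionedSpatialGraph`, the feature dimensions, and the use of a GRU cell for node updates are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of query-conditioned message passing
# over a spatial graph of detected object/human features in one frame.
import torch
import torch.nn as nn


class LanguageConditionedSpatialGraph(nn.Module):
    """One round of message passing where edge weights between object
    nodes are conditioned on a pooled embedding of the language query."""

    def __init__(self, node_dim: int, query_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Scores a (sender, receiver, query) triple to produce an edge weight.
        self.edge_scorer = nn.Sequential(
            nn.Linear(2 * node_dim + query_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        # Transforms sender features into messages; GRU updates each node.
        self.message_fn = nn.Linear(node_dim, node_dim)
        self.update_fn = nn.GRUCell(node_dim, node_dim)

    def forward(self, nodes: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # nodes: (N, node_dim) object/human features for one frame
        # query: (query_dim,) sentence embedding of the natural-language query
        n = nodes.size(0)
        senders = nodes.unsqueeze(0).expand(n, n, -1)    # senders[i, j] = node j
        receivers = nodes.unsqueeze(1).expand(n, n, -1)  # receivers[i, j] = node i
        q = query.view(1, 1, -1).expand(n, n, -1)
        # Query-conditioned attention over incoming edges for each receiver.
        logits = self.edge_scorer(
            torch.cat([senders, receivers, q], dim=-1)
        ).squeeze(-1)
        attn = torch.softmax(logits, dim=1)              # normalize over senders
        messages = attn @ self.message_fn(nodes)         # (N, node_dim) aggregated
        return self.update_fn(messages, nodes)           # updated node states


# Usage: 5 detected regions with 1024-d features and a 300-d query embedding.
layer = LanguageConditionedSpatialGraph(node_dim=1024, query_dim=300)
updated = layer(torch.randn(5, 1024), torch.randn(300))
print(updated.shape)  # torch.Size([5, 1024])
```

A temporal sub-graph, as described in the abstract, would then pass messages between these per-frame node summaries across time before predicting the start and end of the queried segment; that stage is omitted here for brevity.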