EVOQUER: Enhancing Temporal Grounding with Video-Assisted Query Generation (2109.04600v1)
Abstract: Temporal grounding aims to predict the time interval in a video that corresponds to a natural language query. In this work, we present EVOQUER, a temporal grounding framework that combines an existing text-to-video grounding model with a video-assisted query generation network. Given a query and an untrimmed video, the temporal grounding model predicts the target interval, and the predicted video clip is passed to a video translation task that generates a simplified version of the input query. EVOQUER forms a closed learning loop by combining the loss functions of temporal grounding and query generation, with the latter serving as feedback. Our experiments on two widely used datasets, Charades-STA and ActivityNet, show that EVOQUER achieves promising improvements of 1.05 and 1.31 at [email protected]. We also discuss how the query generation task can facilitate error analysis by explaining the behavior of the temporal grounding model.
- Yanjun Gao
- Lulu Liu
- Jason Wang
- Xin Chen
- Huayan Wang
- Rui Zhang
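The closed-loop idea described in the abstract, a temporal grounding loss combined with a query generation loss that acts as feedback on the predicted interval, can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the paper's implementation: `grounder`, `query_decoder`, and the `lambda_qg` weight are hypothetical stand-ins, and the actual EVOQUER model may pool clip features, decode queries, and weight the two losses differently.

```python
import torch
import torch.nn as nn


class ClosedLoopGrounding(nn.Module):
    """Minimal sketch of a closed-loop temporal grounding objective.

    `grounder` and `query_decoder` are hypothetical stand-ins for the
    text-to-video grounding model and the video-assisted query
    generation network mentioned in the abstract.
    """

    def __init__(self, grounder, query_decoder, lambda_qg=0.5):
        super().__init__()
        self.grounder = grounder            # predicts a (start, end) interval and its loss
        self.query_decoder = query_decoder  # generates a simplified query from clip features
        self.lambda_qg = lambda_qg          # weight of the query-generation feedback term
        self.ce = nn.CrossEntropyLoss(ignore_index=0)  # assume index 0 is padding

    def forward(self, video_feats, query_tokens, gt_interval, simple_query_tokens):
        # 1) Temporal grounding: localize the interval for the input query.
        #    Assume the grounder returns integer frame indices and its own loss.
        (start, end), grounding_loss = self.grounder(video_feats, query_tokens, gt_interval)

        # 2) Pool the features inside the predicted interval and decode a
        #    simplified version of the input query from them (teacher forcing).
        clip_feats = video_feats[:, start:end].mean(dim=1)
        logits = self.query_decoder(clip_feats, simple_query_tokens[:, :-1])

        # 3) Query-generation loss serves as feedback on the grounding prediction.
        qg_loss = self.ce(
            logits.reshape(-1, logits.size(-1)),
            simple_query_tokens[:, 1:].reshape(-1),
        )

        # Closed-loop objective: both tasks are trained jointly.
        return grounding_loss + self.lambda_qg * qg_loss
```

The intuition behind the feedback term is that an interval prediction is rewarded when its visual content is sufficient to reconstruct a simplified form of the query, which is how query generation can both regularize grounding and expose where the grounding model goes wrong.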