Zero-shot Natural Language Video Localization (2110.00428v1)
Abstract: Understanding videos to localize moments with natural language often requires large, expensive annotations of video regions paired with language queries. To eliminate the annotation costs, we make a first attempt to train a natural language video localization model in a zero-shot manner. Inspired by the unsupervised image captioning setup, we merely require random text corpora, unlabeled video collections, and an off-the-shelf object detector to train a model. With this unpaired data, we propose to generate pseudo-supervision consisting of candidate temporal regions and corresponding query sentences, and develop a simple natural language video localization (NLVL) model trained with this pseudo-supervision. Our empirical validations show that the proposed pseudo-supervised method outperforms several baseline approaches and a number of methods using stronger supervision on Charades-STA and ActivityNet-Captions.
- Jinwoo Nam (2 papers)
- Daechul Ahn (4 papers)
- Dongyeop Kang (72 papers)
- Seong Jong Ha (3 papers)
- Jonghyun Choi (50 papers)
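
The sketch below illustrates the pseudo-supervision idea described in the abstract: pairing a candidate temporal region from an unlabeled video with a pseudo query built from object-detector outputs and an unpaired text corpus. The function name, the random temporal-proposal heuristic, and the sentence-retrieval step are illustrative assumptions for exposition, not the paper's exact procedure.

```python
import random
from typing import Callable, List, Tuple

# Hypothetical pseudo-supervision generator in the spirit of the zero-shot
# NLVL setup: no (video, query, region) annotations are used.
# `detector` is any off-the-shelf object detector wrapped to return noun
# labels for a frame; `corpus` is an unpaired text corpus (list of sentences).
def make_pseudo_pair(
    frames: List,                              # decoded frames of one unlabeled video
    fps: float,
    detector: Callable[[object], List[str]],   # frame -> detected object nouns
    corpus: List[str],
    min_len_s: float = 2.0,
    max_len_s: float = 10.0,
) -> Tuple[Tuple[float, float], str]:
    """Return one (temporal region, pseudo query) training pair."""
    # 1) Propose a candidate temporal region by random sampling
    #    (a simple stand-in for a temporal proposal step).
    duration = len(frames) / fps
    length = random.uniform(min_len_s, min(max_len_s, duration))
    start = random.uniform(0.0, duration - length)
    end = start + length

    # 2) Run the off-the-shelf detector on frames inside the region and
    #    collect the detected object nouns.
    lo, hi = int(start * fps), int(end * fps)
    nouns = {n.lower() for f in frames[lo:hi] for n in detector(f)}

    # 3) Build a pseudo query: pick the corpus sentence mentioning the most
    #    detected nouns (an illustrative heuristic, not the paper's method).
    def overlap(sentence: str) -> int:
        return len(nouns & set(sentence.lower().split()))

    query = max(corpus, key=overlap) if corpus else " ".join(sorted(nouns))
    return (start, end), query
```

The resulting (region, query) pairs could then serve as targets for a standard NLVL model, replacing human-annotated moment/query pairs during training.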