Egocentric Video-Language Pretraining @ Ego4D Challenge 2022 (2207.01622v2)
Abstract: In this report, we propose a video-language pretraining (VLP) based solution \cite{kevin2022egovlp} to four Ego4D challenge tasks: Natural Language Query (NLQ), Moment Query (MQ), Object State Change Classification (OSCC), and PNR Localization (PNR). In particular, we exploit the recently released Ego4D dataset \cite{grauman2021ego4d} to pioneer Egocentric VLP along three axes: the pretraining dataset, the pretraining objective, and the development set. Based on these three designs, we develop a pretrained video-language model that can transfer its egocentric video-text (or video-only) representations to several downstream video tasks. Our Egocentric VLP achieves 10.46 R@1 (IoU=0.3) on NLQ, 10.33 mAP on MQ, 74% accuracy on OSCC, and a 0.67-second error on PNR. The code is available at https://github.com/showlab/EgoVLP.
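The pretraining objective in EgoVLP builds on the standard symmetric InfoNCE video-text contrastive loss (the paper's EgoNCE variant additionally mines action-aware positives and scene-aware negatives, which is omitted here). As a rough illustration of the base contrastive formulation only, not the paper's exact objective, a minimal PyTorch sketch might look like the following; the tensor shapes, function name, and temperature value are illustrative assumptions:

```python
# Minimal sketch of a symmetric InfoNCE video-text contrastive loss.
# NOTE: this is the generic base objective, not the paper's full EgoNCE;
# shapes, name, and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def video_text_infonce(video_emb: torch.Tensor,
                       text_emb: torch.Tensor,
                       temperature: float = 0.05) -> torch.Tensor:
    """video_emb, text_emb: (B, D) projected features for B matched clip-text pairs."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature          # (B, B) pairwise similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Matched (video_i, text_i) pairs on the diagonal are positives;
    # every other pairing in the batch serves as a negative.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.T, targets)
    return 0.5 * (loss_v2t + loss_t2v)
```

The symmetric form (averaging video-to-text and text-to-video terms) is the common choice for dual-encoder VLP models, since the same aligned embedding space is later used for both retrieval directions and for video-only transfer.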
- Kevin Qinghong Lin
- Alex Jinpeng Wang
- Mattia Soldan
- Michael Wray
- Rui Yan
- Eric Zhongcong Xu
- Difei Gao
- Rongcheng Tu
- Wenzhe Zhao
- Weijie Kong
- Chengfei Cai
- Hongfa Wang
- Dima Damen
- Bernard Ghanem
- Wei Liu
- Mike Zheng Shou