
Egocentric Video-Language Pretraining @ Ego4D Challenge 2022 (2207.01622v2)

Published 4 Jul 2022 in cs.CV

Abstract: In this report, we propose a video-language pretraining (VLP) based solution \cite{kevin2022egovlp} for four Ego4D challenge tasks: Natural Language Query (NLQ), Moment Query (MQ), Object State Change Classification (OSCC), and PNR Localization (PNR). In particular, we exploit the recently released Ego4D dataset \cite{grauman2021ego4d} to pioneer egocentric VLP along three dimensions: pretraining dataset, pretraining objective, and development set. Based on these three designs, we develop a pretrained video-language model that can transfer its egocentric video-text representation or video-only representation to several downstream video tasks. Our Egocentric VLP achieves 10.46 R@1 & IoU=0.3 on NLQ, 10.33 mAP on MQ, 74% accuracy on OSCC, and 0.67 s error on PNR. The code is available at https://github.com/showlab/EgoVLP.
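The abstract does not spell out the pretraining objective here, but video-language pretraining of this kind is typically driven by a symmetric video-text contrastive (InfoNCE-style) loss over paired clips and narrations. The sketch below illustrates that general form only; the function and variable names are illustrative and not taken from the EgoVLP codebase.

```python
import torch
import torch.nn.functional as F

def video_text_infonce(video_emb: torch.Tensor,
                       text_emb: torch.Tensor,
                       temperature: float = 0.05) -> torch.Tensor:
    """Symmetric video-text contrastive (InfoNCE-style) loss.

    video_emb, text_emb: (B, D) embeddings of paired clips and narrations.
    Matched pairs share the same batch index; all other pairs in the batch
    serve as negatives.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2t = F.cross_entropy(logits, targets)    # video -> text direction
    loss_t2v = F.cross_entropy(logits.T, targets)  # text -> video direction
    return 0.5 * (loss_v2t + loss_t2v)

# Example with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    B, D = 8, 256
    loss = video_text_infonce(torch.randn(B, D), torch.randn(B, D))
    print(loss.item())
```

The learned video and text encoders can then be transferred to the downstream tasks listed above, either jointly (video-text) or using the video branch alone.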

Authors (16)
  1. Kevin Qinghong Lin (28 papers)
  2. Alex Jinpeng Wang (20 papers)
  3. Mattia Soldan (11 papers)
  4. Michael Wray (29 papers)
  5. Rui Yan (250 papers)
  6. Eric Zhongcong Xu (6 papers)
  7. Difei Gao (32 papers)
  8. Rongcheng Tu (9 papers)
  9. Wenzhe Zhao (11 papers)
  10. Weijie Kong (11 papers)
  11. Chengfei Cai (10 papers)
  12. Hongfa Wang (29 papers)
  13. Dima Damen (83 papers)
  14. Bernard Ghanem (256 papers)
  15. Wei Liu (1135 papers)
  16. Mike Zheng Shou (165 papers)
Citations (7)