
EgoVideo: Exploring Egocentric Foundation Model and Downstream Adaptation (2406.18070v4)

Published 26 Jun 2024 in cs.CV

Abstract: In this report, we present our solutions to the EgoVis Challenges in CVPR 2024, including five tracks in the Ego4D challenge and three tracks in the EPIC-Kitchens challenge. Building upon the video-language two-tower model and leveraging our meticulously organized egocentric video data, we introduce a novel foundation model called EgoVideo. This model is specifically designed to cater to the unique characteristics of egocentric videos and provides strong support for our competition submissions. In the Ego4D challenges, we tackle various tasks including Natural Language Queries, Step Grounding, Moment Queries, Short-term Object Interaction Anticipation, and Long-term Action Anticipation. In addition, we also participate in the EPIC-Kitchens challenge, where we engage in the Action Recognition, Multiple Instance Retrieval, and Domain Adaptation for Action Recognition tracks. By adapting EgoVideo to these diverse tasks, we showcase its versatility and effectiveness in different egocentric video analysis scenarios, demonstrating the powerful representation ability of EgoVideo as an egocentric foundation model. Our codebase and pretrained models are publicly available at https://github.com/OpenGVLab/EgoVideo.

Authors (11)
  1. Baoqi Pei (10 papers)
  2. Guo Chen (107 papers)
  3. Jilan Xu (32 papers)
  4. YuPing He (11 papers)
  5. Yicheng Liu (25 papers)
  6. Kanghua Pan (1 paper)
  7. Yifei Huang (71 papers)
  8. Yali Wang (78 papers)
  9. Tong Lu (85 papers)
  10. Limin Wang (221 papers)
  11. Yu Qiao (563 papers)
Citations (6)