MM-Ego: Towards Building Egocentric Multimodal LLMs (2410.07177v1)

Published 9 Oct 2024 in cs.CV, cs.AI, and cs.LG

Abstract: This research aims to comprehensively explore building a multimodal foundation model for egocentric video understanding. To achieve this goal, we work on three fronts. First, as there is a lack of QA data for egocentric video understanding, we develop a data engine that efficiently generates 7M high-quality QA samples for egocentric videos ranging from 30 seconds to one hour long, based on human-annotated data. This is currently the largest egocentric QA dataset. Second, we contribute a challenging egocentric QA benchmark with 629 videos and 7,026 questions to evaluate the models' ability in recognizing and memorizing visual details across videos of varying lengths. We introduce a new de-biasing evaluation method to help mitigate the unavoidable language bias present in the models being evaluated. Third, we propose a specialized multimodal architecture featuring a novel "Memory Pointer Prompting" mechanism. This design includes a global glimpse step to gain an overarching understanding of the entire video and identify key visual information, followed by a fallback step that utilizes the key visual information to generate responses. This enables the model to more effectively comprehend extended video content. With the data, benchmark, and model, we successfully build MM-Ego, an egocentric multimodal LLM that shows powerful performance on egocentric video understanding.

Summary of "MM-Ego: Towards Building Egocentric Multimodal LLMs"

The focus of this paper is on the development of MM-Ego, a multimodal LLM (MLLM) designed for understanding egocentric video content. The research targets the unique challenges posed by egocentric videos, which are recorded from a first-person perspective and often involve dynamic scenes of human activities. The authors address gaps in existing data, model design, and benchmarking for this specialized domain by introducing new methodologies and resources.

Contributions

  1. Data Engine for Egocentric QA Generation: The paper describes a novel data engine that automatically generates a large-scale dataset of 7 million question-answer (QA) samples from human-annotated egocentric video narrations. This is currently the largest dataset of its kind and provides essential training material for models with strong egocentric video understanding; a minimal sketch of such a narration-to-QA pipeline appears after this list.
  2. Benchmark Creation: The authors introduce EgoMemoria, a challenging benchmark for evaluating how well models recognize and remember visual details in egocentric videos. It comprises 7,026 questions over 629 videos of varying lengths, up to an hour long.
  3. Model Architecture: They propose a specialized multimodal architecture built around a "Memory Pointer Prompting" mechanism, which lets the model efficiently identify and process the key visual details in extended video content. The method involves a two-step process (see the second sketch after this list):
    • Global Glimpse: Scans the entire video at a coarse level to build an overarching understanding and identify key visual information.
    • Fallback: Uses that key visual information to generate the answer to the question at hand.
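
A minimal, hypothetical sketch of the narration-to-QA data engine mentioned in item 1: the prompt wording, the `llm` callable, and the `parse_qa` helper are illustrative assumptions rather than the paper's actual pipeline, which this summary only describes at a high level.

```python
# Hypothetical sketch of a narration-to-QA data engine; prompt wording, the llm
# callable, and the parse_qa helper are illustrative assumptions, not the paper's pipeline.
from dataclasses import dataclass


@dataclass
class NarrationClip:
    video_id: str
    start_s: float
    end_s: float
    text: str  # human-annotated narration, e.g. "the camera wearer picks up the kettle"


def narrations_to_qa(clips, llm, parse_qa):
    """Turn timestamped narrations into QA training samples with an instruction-tuned LLM."""
    qa_samples = []
    for clip in clips:
        prompt = (
            "Below is a narration of a first-person (egocentric) video segment. "
            "Write one question that tests memory of a visual detail in it, then the answer.\n"
            f"Narration: {clip.text}"
        )
        question, answer = parse_qa(llm(prompt))  # parse_qa splits the model's reply into Q and A
        qa_samples.append({
            "video_id": clip.video_id,
            "span": (clip.start_s, clip.end_s),
            "question": question,
            "answer": answer,
        })
    return qa_samples
```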

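The "Memory Pointer Prompting" flow from item 3 can likewise be illustrated in code. This is a simplified sketch assuming precomputed per-frame visual features and an `answer_head` generator; the mean-pooling, dot-product scoring, and top-k selection are stand-ins meant only to convey the global-glimpse-then-fallback structure, not MM-Ego's implementation.

```python
# Minimal sketch of the global-glimpse / fallback idea; tensor shapes, the pooling,
# and the answer_head callable are simplifying assumptions, not MM-Ego's implementation.
import torch


def memory_pointer_prompting(frame_feats, question_emb, answer_head, top_k=32):
    """frame_feats: (num_frames, tokens_per_frame, dim) visual features for the full video.
    question_emb: (dim,) pooled question embedding.
    answer_head:  callable that generates an answer from visual tokens and the question.
    """
    # Global glimpse: look at a coarse, pooled view of every frame and score it
    # against the question to locate the key visual information ("memory pointers").
    pooled = frame_feats.mean(dim=1)              # (num_frames, dim)
    scores = pooled @ question_emb                # (num_frames,)
    pointers = torch.topk(scores, k=min(top_k, scores.numel())).indices

    # Fallback: re-read only the pointed-to frames at full token resolution and
    # hand them, together with the question, to the answer generator.
    key_frames = frame_feats[pointers]            # (k, tokens_per_frame, dim)
    return answer_head(key_frames.flatten(0, 1), question_emb)
```
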
Numerical Results

The MM-Ego model demonstrates substantial improvements in egocentric video understanding. On the EgoMemoria benchmark, it achieves a Mean Debiased Accuracy (MDA) of 61.27, significantly outperforming baseline models like LLaVA-OV. This highlights the model's ability to accurately comprehend and reason through lengthy egocentric footage.
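
The abstract mentions a de-biasing evaluation that mitigates language bias, but the exact Mean Debiased Accuracy computation is not reproduced in this summary. As a purely hypothetical illustration of the general idea, the snippet below assumes each question is also scored without visual input and discounts questions a model answers correctly from language priors alone.

```python
# Hypothetical illustration only: the summary does not spell out the paper's Mean
# Debiased Accuracy formula. Here, questions a model answers correctly even without
# the video are treated as language-prior hits and excluded before averaging.
def debiased_accuracy(correct_with_video, correct_without_video):
    """Both arguments are per-question lists of booleans for the same question set."""
    kept = [hit for hit, blind_hit in zip(correct_with_video, correct_without_video)
            if not blind_hit]
    return 100.0 * sum(kept) / max(len(kept), 1)
```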

Implications

The introduction of MM-Ego and its associated training resources represents a step forward in multimodal AI, particularly for applications involving augmented and virtual reality, wearable devices, and autonomous systems. The research underscores the importance of specialized data and model architectures in addressing the nuanced challenges posed by egocentric perspectives.

Future Directions

The paper suggests potential enhancements in data diversity and model capacity to extend MM-Ego's effectiveness in even longer or continuous egocentric video streams. Future work may involve integrating more sophisticated attention mechanisms or expanding the range of tested real-world scenarios.

In conclusion, this work lays a robust foundation for advancing egocentric video understanding in AI, offering vital tools and methodologies for researchers in the field. The MM-Ego model, along with its novel data synthesis approach and rigorous evaluation benchmark, is posited as a cornerstone for ongoing developments in multimodal LLMs.

Authors (12)
  1. Hanrong Ye
  2. Haotian Zhang
  3. Erik Daxberger
  4. Lin Chen
  5. Zongyu Lin
  6. Yanghao Li
  7. Bowen Zhang
  8. Haoxuan You
  9. Dan Xu
  10. Zhe Gan
  11. Jiasen Lu
  12. Yinfei Yang