Just Ask: Learning to Answer Questions from Millions of Narrated Videos (2012.00451v3)

Published 1 Dec 2020 in cs.CV, cs.CL, and cs.LG

Abstract: Recent methods for visual question answering rely on large-scale annotated datasets. Manual annotation of questions and answers for videos, however, is tedious, expensive and prevents scalability. In this work, we propose to avoid manual annotation and generate a large-scale training dataset for video question answering making use of automatic cross-modal supervision. We leverage a question generation transformer trained on text data and use it to generate question-answer pairs from transcribed video narrations. Given narrated videos, we then automatically generate the HowToVQA69M dataset with 69M video-question-answer triplets. To handle the open vocabulary of diverse answers in this dataset, we propose a training procedure based on a contrastive loss between a video-question multi-modal transformer and an answer transformer. We introduce the zero-shot VideoQA task and show excellent results, in particular for rare answers. Furthermore, we demonstrate our method to significantly outperform the state of the art on MSRVTT-QA, MSVD-QA, ActivityNet-QA and How2QA. Finally, for a detailed evaluation we introduce iVQA, a new VideoQA dataset with reduced language biases and high-quality redundant manual annotations. Our code, datasets and trained models are available at https://antoyang.github.io/just-ask.html.

Overview of "Just Ask: Learning to Answer Questions from Millions of Narrated Videos"

The paper, "Just Ask: Learning to Answer Questions from Millions of Narrated Videos," presents a novel approach to Visual Question Answering (VideoQA) by leveraging automatically generated large-scale datasets. The authors circumvent the challenges of manually annotating video datasets, which are both costly and unscalable, by developing an automatic method for VideoQA dataset generation using transcribed narrations from videos and advanced LLMs.

Key Contributions

  1. HowToVQA69M Dataset: The authors introduce HowToVQA69M, a dataset consisting of 69 million video-question-answer triplets. This dataset is generated using narrated videos without the need for manual annotations, leveraging automatic cross-modal supervision.
  2. Question-Answer Generation: The dataset is built with a transformer-based question-generation model that is trained on text-only QA corpora and then applied to transcribed video narrations, producing question-answer pairs aligned with the corresponding video clips (a hedged generation sketch follows this list). This approach supports open-vocabulary answers, addressing a notable limitation of models that rely on fixed answer vocabularies.
  3. Model Architecture: The authors propose a multi-modal transformer that embeds the video and question jointly, paired with a separate answer transformer. The two encoders are trained with a contrastive loss, which equips the model to handle the open-ended nature of VideoQA (see the training sketch after this list).
  4. Zero-shot VideoQA Task: A new task is introduced to evaluate the model’s ability to generalize when no manually annotated visual data is used during training. The results show strong zero-shot performance, especially for infrequent answers; the answer-scoring step used at inference is included in the training sketch that follows this list.
  5. Benchmarking Against Existing Datasets: The novel approach significantly outperforms state-of-the-art methods across several popular VideoQA datasets, including MSRVTT-QA, MSVD-QA, ActivityNet-QA, and How2QA. The introduction of iVQA, a new dataset with high-quality manual annotations and reduced language biases, further substantiates the robustness of the proposed method.
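
As a rough illustration of the generation step in contribution 2, the sketch below applies a text-to-text question-generation model to a single transcribed narration sentence. It is a hedged example: the checkpoint name and prompt format are placeholders rather than the authors' released pipeline, which trains dedicated generation models on text-only QA data.

```python
from transformers import pipeline

# Placeholder checkpoint name; the paper trains its own question-answer
# generation transformers on text-only QA data and applies them to
# transcribed narrations from instructional videos.
generator = pipeline("text2text-generation", model="your-org/t5-qa-generation")

# A transcribed narration sentence from a narrated video.
narration = "whisk the eggs with a fork before pouring them into the hot pan"

# The prompt format below is a common T5-style convention, assumed here for
# illustration; the exact format depends on how the checkpoint was trained.
outputs = generator(f"generate question and answer: {narration}", max_new_tokens=48)
print(outputs[0]["generated_text"])
```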

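The contrastive training in contribution 3 and the zero-shot inference in contribution 4 can be summarized in a minimal PyTorch sketch. This is a hedged illustration, not the authors' released code: an in-batch softmax contrastive loss stands in for the paper's exact objective, and random tensors stand in for the outputs of the video-question transformer and the answer transformer.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(vq_emb: torch.Tensor, ans_emb: torch.Tensor) -> torch.Tensor:
    """In-batch contrastive loss: row i of vq_emb should match row i of ans_emb,
    while the other answers in the batch act as negatives."""
    logits = vq_emb @ ans_emb.t()            # (B, B) similarity matrix
    targets = torch.arange(vq_emb.size(0))   # positive pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

@torch.no_grad()
def zero_shot_answer(vq_emb: torch.Tensor, cand_embs: torch.Tensor, answers):
    """Zero-shot inference: return the candidate answer whose embedding scores
    highest against the joint video-question embedding."""
    scores = cand_embs @ vq_emb              # (N,) dot-product scores
    return answers[int(scores.argmax())]

# Toy usage with random tensors standing in for real encoder outputs.
batch, dim = 8, 256
loss = contrastive_loss(torch.randn(batch, dim), torch.randn(batch, dim))
print(f"contrastive loss: {loss.item():.3f}")

candidates = ["spoon", "whisk", "pan", "eggs"]
print(zero_shot_answer(torch.randn(dim), torch.randn(len(candidates), dim), candidates))
```

Scoring arbitrary answer text against the joint video-question embedding, rather than classifying into a fixed answer set, is the design choice that supports the open vocabulary of answers described in the abstract.
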
Implications and Future Directions

Practically, this research expands the scope of scalable VideoQA systems by exploiting the vast amount of narrated video content available on the internet. The advances in cross-modal learning and automatic question generation make it possible to train strong VideoQA models without exhaustive manual annotation. Theoretically, the work underlines the importance of task-specific data generation and illustrates how diverse, large-scale datasets can improve model generalization across varied VideoQA scenarios.

This paper heralds a future where AI models could autonomously learn and adapt by continuously ingesting and understanding complex video content, which is particularly significant as the amount of multimedia data continues to grow. Future research can explore variations in dataset domains and complexities, further enriching the capabilities of VideoQA systems. There is also potential to refine question-generation language models for better context understanding and to develop more sophisticated evaluation metrics that capture the nuances of open-ended answer spaces.

Authors (5)
  1. Antoine Yang (12 papers)
  2. Antoine Miech (23 papers)
  3. Josef Sivic (78 papers)
  4. Ivan Laptev (99 papers)
  5. Cordelia Schmid (206 papers)
Citations (264)