MERLOT: Multimodal Neural Script Knowledge Models (2106.02636v3)

Published 4 Jun 2021 in cs.CV, cs.CL, and cs.LG

Abstract: As humans, we understand events in the visual world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future. We introduce MERLOT, a model that learns multimodal script knowledge by watching millions of YouTube videos with transcribed speech -- in an entirely label-free, self-supervised manner. By pretraining with a mix of both frame-level (spatial) and video-level (temporal) objectives, our model not only learns to match images to temporally corresponding words, but also to contextualize what is happening globally over time. As a result, MERLOT exhibits strong out-of-the-box representations of temporal commonsense, and achieves state-of-the-art performance on 12 different video QA datasets when finetuned. It also transfers well to the world of static images, allowing models to reason about the dynamic context behind visual scenes. On Visual Commonsense Reasoning, MERLOT answers questions correctly with 80.6% accuracy, outperforming state-of-the-art models of similar size by over 3%, even those that make heavy use of auxiliary supervised data (like object bounding boxes). Ablation analyses demonstrate the complementary importance of: 1) training on videos versus static images; 2) scaling the magnitude and diversity of the pretraining video corpus; and 3) using diverse objectives that encourage full-stack multimodal reasoning, from the recognition to cognition level.

Citations (352)

Summary

  • The paper introduces MERLOT, which leverages self-supervised learning on a diverse set of 6M YouTube videos to teach machines temporal script knowledge.
  • It employs a dual encoder architecture that combines grid-based visual encoding with Transformer-based joint vision-language encoding for spatial and temporal alignment.
  • Empirical results demonstrate MERLOT’s state-of-the-art performance on benchmarks like VCR and strong zero-shot transfer in visual storytelling tasks.

Analysis of MERLOT: Multimodal Neural Script Knowledge Models

MERLOT advances the field of multimodal machine learning by addressing a longstanding challenge: teaching machines temporal script knowledge through self-supervised learning. Unlike conventional models that predominantly learn from static images and captions, MERLOT derives its strength from pretraining on a diverse corpus of six million YouTube videos with automatically transcribed speech. This innovative approach not only highlights the potential of integrating video and textual information but also demonstrates state-of-the-art performance across multiple video and still image reasoning tasks.

MERLOT's architecture is built upon two foundational components: a grid-based visual encoder and a Transformer-based joint vision-language encoder. The visual encoder processes each video frame independently, while the joint encoder merges the visual and textual modalities into cohesive representations. This dual setup lets the model learn from both spatially and temporally aligned data, enabling it to comprehend the dynamic events depicted in videos.
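
As a rough illustration of this two-stage design, the sketch below (PyTorch) encodes each frame independently into a grid of visual features, embeds the transcript tokens, and passes the concatenated sequence through a joint Transformer. The module names, dimensions, and the plain patch-projection backbone are illustrative assumptions for this sketch, not the paper's exact configuration (which uses a hybrid ResNet/ViT image encoder), and positional/segment embeddings are omitted for brevity.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Encodes each video frame independently into a grid of visual features.

    Stand-in for MERLOT's image backbone; a simple patch projection is used
    here purely for illustration.
    """
    def __init__(self, patch=32, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, frames):                       # (B, T, 3, H, W)
        b, t = frames.shape[:2]
        x = self.proj(frames.flatten(0, 1))          # (B*T, dim, h, w)
        x = x.flatten(2).transpose(1, 2)             # (B*T, h*w, dim)
        return x.view(b, t, -1, x.size(-1))          # (B, T, grid, dim)

class JointEncoder(nn.Module):
    """Joint vision-language Transformer over frame grids and transcript tokens."""
    def __init__(self, vocab=30522, dim=768, layers=12, heads=12):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, frame_feats, token_ids):
        b, t, g, d = frame_feats.shape
        visual = frame_feats.reshape(b, t * g, d)    # flatten all frames into one sequence
        text = self.tok(token_ids)                   # (B, L, dim)
        return self.encoder(torch.cat([visual, text], dim=1))
```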

The paper introduces several key contributions, including a new large-scale dataset, YT-Temporal-180M, which spans videos from diverse domains and topics, and a set of pretraining objectives. MERLOT optimizes three core tasks: contrastive frame-transcript matching, attention-masked language modeling, and temporal reordering. Notably, contrastive learning on video segments aligns closely coupled spoken and visual data, while attention masking selectively targets visually grounded words, strengthening the model's reliance on visual context when predicting masked tokens.
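
The sketch below shows how these three losses could be combined in training code: an InfoNCE-style contrastive term matching pooled frame features to pooled transcript-segment features, a masked-language-modeling term over transcript tokens, and a binary temporal term that detects whether a pair of segments was swapped. The tensor names, heads, and equal loss weighting are assumptions made for this sketch rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(frame_emb, text_emb, temperature=0.05):
    """InfoNCE-style frame-transcript matching over a batch of segments."""
    frame_emb = F.normalize(frame_emb, dim=-1)        # (N, dim)
    text_emb = F.normalize(text_emb, dim=-1)          # (N, dim)
    logits = frame_emb @ text_emb.t() / temperature   # (N, N)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def masked_lm_loss(token_logits, labels):
    """Predict masked transcript tokens; positions labeled -100 are ignored."""
    return F.cross_entropy(token_logits.view(-1, token_logits.size(-1)),
                           labels.view(-1), ignore_index=-100)

def temporal_order_loss(pair_logits, swapped):
    """Binary classification: was this pair of video segments shown out of order?"""
    return F.binary_cross_entropy_with_logits(pair_logits, swapped.float())

def pretraining_loss(frame_emb, text_emb, token_logits, mlm_labels,
                     pair_logits, swapped):
    # Equal weighting of the three terms is an assumption for this sketch.
    return (contrastive_loss(frame_emb, text_emb)
            + masked_lm_loss(token_logits, mlm_labels)
            + temporal_order_loss(pair_logits, swapped))
```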

In empirical evaluations, MERLOT sets new state-of-the-art results on a variety of established benchmarks. For instance, on Visual Commonsense Reasoning (VCR), MERLOT surpasses previous models with a 65.1% score on the Q→AR metric. The model also demonstrates strong zero-shot transfer, exemplified by its ability to unscramble narratives in visual storytelling tasks without fine-tuning. These results underscore its capacity to reason about complex temporal and commonsense phenomena across diverse contexts.
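
As a hypothetical illustration of how such zero-shot unscrambling could be driven by a temporal-ordering head, the helper below scores every permutation of a short story by summing pairwise "A comes before B" logits and returns the highest-scoring order. The `pairwise_before_logit` callable is a placeholder for whatever ordering score the pretrained model exposes; it is not an API from the paper's released code.

```python
from itertools import permutations
from typing import Callable, List, Sequence

def unscramble(items: Sequence[str],
               pairwise_before_logit: Callable[[str, str], float]) -> List[str]:
    """Return the ordering whose summed pairwise 'before' scores are highest.

    Brute-force over permutations, so only practical for short stories
    (e.g. the 5-element stories in visual storytelling benchmarks).
    """
    best_order, best_score = list(items), float("-inf")
    for perm in permutations(items):
        score = sum(pairwise_before_logit(a, b)
                    for i, a in enumerate(perm)
                    for b in perm[i + 1:])
        if score > best_score:
            best_order, best_score = list(perm), score
    return best_order
```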

MERLOT's ablation studies reveal insights into its configuration: the superiority of the full YT-Temporal-180M dataset over smaller, domain-specific alternatives, the effectiveness of attention-masked language modeling, and the continued benefit of diverse pretraining, with no apparent saturation of performance even after many epochs. The findings also suggest that larger context windows enhance learning, albeit with the risk of increased dependence on linguistic shortcuts, which attention masking successfully mitigates.

Despite its successes, MERLOT is not without limitations or ethical considerations. Concerns are raised regarding biases inherent in the YouTube data, the environmental cost of extensive pretraining, and potential misuse in surveillance. The model's reliance on publicly posted online video also raises issues of privacy and consent, despite the measures implemented to mitigate these risks.

Future research could explore finer-grained temporal modeling, multilingual extensions, and measures to counter bias. The work supports the growing consensus that self-supervised multimodal learning from richly diverse sources holds considerable promise for building robust AI systems capable of complex, commonsense reasoning akin to human understanding.