MuRAG: Multimodal Retrieval-Augmented Generator for Open Question Answering over Images and Text (2210.02928v2)

Published 6 Oct 2022 in cs.CL, cs.AI, and cs.CV

Abstract: While LLMs store a massive amount of world knowledge implicitly in their parameters, even very large models often fail to encode information about rare entities and events, while incurring huge computational costs. Recently, retrieval-augmented models, such as REALM, RAG, and RETRO, have incorporated world knowledge into language generation by leveraging an external non-parametric index and have demonstrated impressive performance with constrained model sizes. However, these methods are restricted to retrieving only textual knowledge, neglecting the ubiquitous amount of knowledge in other modalities like images -- much of which contains information not covered by any text. To address this limitation, we propose the first Multimodal Retrieval-Augmented Transformer (MuRAG), which accesses an external non-parametric multimodal memory to augment language generation. MuRAG is pre-trained with a mixture of large-scale image-text and text-only corpora using a joint contrastive and generative loss. We perform experiments on two different datasets that require retrieving and reasoning over both images and text to answer a given query: WebQA, and MultimodalQA. Our results show that MuRAG achieves state-of-the-art accuracy, outperforming existing models by 10-20\% absolute on both datasets and under both distractor and full-wiki settings.

Introduction

The Multimodal Retrieval-Augmented Transformer (MuRAG) is a significant advance in open question answering (QA) over both visual and textual data. It addresses a limitation of existing retrieval-augmented LLMs, which rely solely on textual knowledge and therefore overlook the vast amount of information contained only in images. MuRAG instead retrieves from an external non-parametric multimodal memory, enabling information retrieval across modalities to ground its generated answers.

Pre-training and Model Architecture

MuRAG is built on a backbone of pre-trained T5 and ViT models, allowing it to encode both text and images. It is pre-trained on a mixture of large-scale corpora, including LAION, Conceptual Captions, VQA, and PAQ, to teach the model to augment language generation with external knowledge. Training combines a contrastive loss and a generative loss, jointly optimizing retrieval accuracy and answer generation; a minimal sketch of this joint objective is given below.
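
The following is a minimal, illustrative sketch (not the authors' code) of such a joint objective: an in-batch contrastive loss that pushes each query embedding toward its paired image-text memory entry, added to a standard generative cross-entropy loss on the answer tokens. The tensor shapes, the temperature value, and the use of PyTorch are assumptions for illustration; the T5/ViT encoders and the decoder are abstracted behind placeholder tensors.

```python
# Hedged sketch of a joint contrastive + generative training signal.
import torch
import torch.nn.functional as F

def joint_loss(query_emb, memory_emb, decoder_logits, target_ids, temperature=0.07):
    """query_emb:      (B, d)    backbone encoding of each question
    memory_emb:     (B, d)    encoding of the matching image-text memory entry
    decoder_logits: (B, T, V) decoder outputs over the vocabulary
    target_ids:     (B, T)    gold answer tokens"""
    # Contrastive term: each query should score its own memory entry highest
    # among the other entries in the batch (in-batch negatives).
    q = F.normalize(query_emb, dim=-1)
    m = F.normalize(memory_emb, dim=-1)
    scores = q @ m.t() / temperature              # (B, B) similarity matrix
    labels = torch.arange(q.size(0))              # diagonal entries are the positives
    contrastive = F.cross_entropy(scores, labels)

    # Generative term: token-level cross-entropy on the answer sequence.
    generative = F.cross_entropy(
        decoder_logits.reshape(-1, decoder_logits.size(-1)),
        target_ids.reshape(-1),
    )
    return contrastive + generative

# Toy usage with random placeholder tensors (shapes are illustrative only).
B, d, T, V = 4, 768, 16, 32128
loss = joint_loss(torch.randn(B, d), torch.randn(B, d),
                  torch.randn(B, T, V), torch.randint(0, V, (B, T)))
```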

Datasets and Fine-tuning

MuRAG is evaluated on the WebQA and MultimodalQA datasets, both of which require retrieving and reasoning over images and text to answer a question. Its ability to generate accurate, visually grounded answers is particularly notable when a query demands both text and image comprehension. Fine-tuning proceeds in two stages: the model is first trained with an in-batch memory and then against a statically encoded global memory, aligning it with the final retrieval setting; a retrieval sketch over such a global memory follows below.
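
Below is a hedged sketch of the second-stage setup under simple assumptions: memory entries are encoded once into fixed vectors, and each query retrieves its top-k entries by inner product (maximum inner product search). The array sizes, the normalization, and the NumPy implementation are illustrative choices, not details taken from the paper.

```python
# Hedged sketch of retrieval against a statically encoded global memory.
import numpy as np

rng = np.random.default_rng(0)
# Frozen, pre-encoded image/text memory entries (size and dimension are illustrative).
memory = rng.standard_normal((100_000, 768)).astype(np.float32)
memory /= np.linalg.norm(memory, axis=1, keepdims=True)

def retrieve(query_emb: np.ndarray, k: int = 4) -> np.ndarray:
    """Return indices of the k memory entries with the highest inner product."""
    scores = memory @ (query_emb / np.linalg.norm(query_emb))
    return np.argpartition(-scores, k)[:k]

# The retrieved entries' raw image/text content would then be combined with the
# question and passed to the generator to produce the answer.
top_ids = retrieve(rng.standard_normal(768).astype(np.float32))
```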

Performance and Insights

MuRAG's performance significantly exceeds that of existing baselines, by 10-20% absolute accuracy on both datasets, with the advantage especially pronounced in the full-wiki setting. While it shows a strong ability to incorporate external multimodal knowledge, it remains weaker on purely image-centric queries. A human error analysis categorizes the remaining failures, noting particular difficulty with counting and object recognition.

Conclusion

In summary, MuRAG marks an important step forward in multimodal open-domain QA tasks. Despite current limitations, its achievements underscore the model's potential as a streamlined and extendable solution for integrating multimodal data into pre-trained LLMs. This approach not only expands the knowledge base available to QA systems but also introduces a more context-aware and enriched data processing capability. Future work may focus on aligning pre-training objectives more closely with downstream applications to further enhance model performance.

Authors (5)
  1. Wenhu Chen (134 papers)
  2. Hexiang Hu (48 papers)
  3. Xi Chen (1035 papers)
  4. Pat Verga (16 papers)
  5. William W. Cohen (79 papers)
Citations (101)