MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents (2501.08828v2)

Published 15 Jan 2025 in cs.IR, cs.AI, cs.CL, and cs.CV

Abstract: Multimodal document retrieval aims to identify and retrieve various forms of multimodal content, such as figures, tables, charts, and layout information from extensive documents. Despite its increasing popularity, there is a notable lack of a comprehensive and robust benchmark to effectively evaluate the performance of systems in such tasks. To address this gap, this work introduces a new benchmark, named MMDocIR, that encompasses two distinct tasks: page-level and layout-level retrieval. The former evaluates the performance of identifying the most relevant pages within a long document, while the latter assesses the ability to detect specific layouts, providing a more fine-grained measure than whole-page analysis. A layout refers to a variety of elements, including textual paragraphs, equations, figures, tables, or charts. The MMDocIR benchmark comprises a rich dataset featuring 1,685 questions annotated by experts and 173,843 questions with bootstrapped labels, making it a valuable resource in multimodal document retrieval for both training and evaluation. Through rigorous experiments, we demonstrate that (i) visual retrievers significantly outperform their text counterparts, (ii) the MMDocIR training set effectively enhances the performance of multimodal document retrieval, and (iii) text retrievers leveraging VLM-text significantly outperform retrievers relying on OCR-text. Our dataset is available at https://mmdocrag.github.io/MMDocIR/.

Summary

  • The paper presents MMDocIR, a benchmark that defines dual retrieval tasks to overcome quality and granularity limitations in multi-modal document retrieval.
  • It shows that visual-driven retrievers outperform text-driven methods in both page-level and layout-level tasks, and that text retrievers using VLM-generated text beat those relying on OCR text.
  • Annotation quality is high (F1 scores of 95.2% for page-level and 87.1% for layout-level labels), and the experiments underscore the practical benefit of prioritizing visual information in retrieval.

The paper introduces the Multi-Modal Document Information Retrieval (MMDocIR) benchmark for evaluating multi-modal document retrieval systems. The authors highlight the limitations of existing benchmarks in terms of question quality, document quality, and retrieval granularity. To address these limitations, MMDocIR is structured around two tasks: page-level retrieval and layout-level retrieval. The page-level retrieval task aims to identify the most relevant pages within a document in response to a user query, while the layout-level retrieval task focuses on retrieving specific layouts, such as paragraphs, equations, figures, tables, and charts.

The MMDocIR benchmark includes an evaluation set comprising 313 documents with expert-annotated labels for 1,658 question-answer (QA) pairs, and a training set comprising 6,878 documents and labels for 73,843 QA pairs. The evaluation set is derived from the MMLongBench-Doc and DocBench datasets, with questions filtered and revised to suit document retrieval tasks. The annotation process involves page-level and layout-level labeling, with rigorous quality control measures in place, achieving an F1 score of 95.2% for page-level annotations and 87.1% for layout-level annotations.

The authors conduct experiments to evaluate existing multi-modal document retrieval baselines, which are categorized into visual-driven and text-driven retrievers. Visual-driven retrievers use Vision-Language Models (VLMs) to generate embeddings for queries and documents, while text-driven retrievers rely on Optical Character Recognition (OCR) or VLMs to convert multi-modal content into text before employing language models (LMs). The experimental results demonstrate that visual-driven retrievers outperform text-driven retrievers. The authors also train two visual retrievers, DPR-Phi3 and Col-Phi3, based on Phi3-Vision, and evaluate their effectiveness using the MMDocIR training set.

The methodology involves an offline indexing phase, where each page and layout is transformed into a vector representation, and an online querying phase, where a query is converted into a vector and compared against the indexed vectors using similarity scores. The similarity between the query and the document is computed using cosine similarity for DPR-Phi3 and a maximum dot product for Col-Phi3.
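To make the two scoring schemes concrete, here is a minimal, self-contained sketch: cosine similarity for single-vector (DPR-Phi3-style) embeddings and a ColBERT-style MaxSim score for multi-vector (Col-Phi3-style) token embeddings. The random NumPy arrays stand in for the actual model outputs, the dimensions `D1`, `D2`, `N_q`, `N_d` are illustrative placeholders rather than the paper's values, and the MaxSim form (sum over query tokens of the maximum dot product against document tokens) is the standard late-interaction formulation assumed here.

```python
import numpy as np

def dpr_similarity(e_q: np.ndarray, e_d: np.ndarray) -> float:
    """Cosine similarity between one query vector and one page/layout vector
    (the scoring described for DPR-Phi3-style retrievers)."""
    return float(e_q @ e_d / (np.linalg.norm(e_q) * np.linalg.norm(e_d)))

def col_similarity(E_q: np.ndarray, E_d: np.ndarray) -> float:
    """Late-interaction (MaxSim) score: for each query token embedding, take the
    maximum dot product over all document token embeddings, then sum over query
    tokens (the maximum-dot-product scoring described for Col-Phi3-style retrievers)."""
    # E_q: (N_q, D2), E_d: (N_d, D2)
    return float((E_q @ E_d.T).max(axis=1).sum())

# --- toy example with random placeholder embeddings (sizes are hypothetical) ---
rng = np.random.default_rng(0)
D1, D2, N_q, N_d = 3072, 128, 16, 512

# offline indexing: one vector per page (DPR-style) or one vector per token (Col-style)
page_vec  = rng.standard_normal(D1)
page_toks = rng.standard_normal((N_d, D2))

# online querying: embed the query, then score it against the indexed vectors
query_vec  = rng.standard_normal(D1)
query_toks = rng.standard_normal((N_q, D2))

print("DPR-style score:", dpr_similarity(query_vec, page_vec))
print("Col-style score:", col_similarity(query_toks, page_toks))
```

Storing one vector per token (as in the Col-style index) rather than one per page is what drives the storage overhead noted in the findings below.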

Key findings from the experiments include:

  • Visual retrievers outperform text retrievers in page-level retrieval, highlighting the importance of visual elements.
  • VLM-text approaches, while underperforming visual retrievers, perform better than OCR-text methods.
  • Token-level retrievers achieve better Recall@1 than their page-level (single-vector) counterparts but incur higher storage overhead (see the Recall@k sketch after this list).
  • In layout-level retrieval, visual retrievers show performance advantages over text retrievers using OCR-text.
  • VLM-text approaches achieve comparable performance to visual retrievers in layout-level retrieval.
  • Hybrid image-text sequences in visual retrievers perform less effectively than pure image sequences.
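Several of these findings are reported in terms of Recall@k (e.g., Recall@1). As a minimal sketch, the function below computes the standard Recall@k from a ranked page list and the set of gold evidence pages; the toy ranking and gold labels are invented for illustration and are not drawn from MMDocIR, and the paper's exact metric definition may differ in detail.

```python
def recall_at_k(ranked_pages: list[int], gold_pages: set[int], k: int) -> float:
    """Fraction of ground-truth evidence pages that appear in the top-k retrieved pages."""
    if not gold_pages:
        return 0.0
    hits = sum(1 for p in ranked_pages[:k] if p in gold_pages)
    return hits / len(gold_pages)

# toy example: retriever ranks pages [3, 7, 1, 9, 12, 4]; gold evidence is pages {7, 12}
ranking = [3, 7, 1, 9, 12, 4]
gold = {7, 12}
print(recall_at_k(ranking, gold, k=1))  # 0.0 -> neither gold page is ranked first
print(recall_at_k(ranking, gold, k=5))  # 1.0 -> both gold pages appear in the top 5
```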

The authors also analyze the differences between OCR and VLM text, noting that VLM-text is longer and more comprehensive, although it comes with higher computational overhead.

In summary, the paper presents the MMDocIR benchmark as a resource for advancing multi-modal document retrieval, with a dual-task retrieval framework and a comprehensive evaluation of existing retrieval systems. The results emphasize the importance of visual information in multi-modal document retrieval and highlight the potential benefits of using VLMs.

Variables used in the LaTeX formulas:

  • $\mathcal{D}$: Document corpus
  • $\mathcal{P}$: Set of document pages, $\mathcal{P} = \{p_1, p_2, \dots, p_n\}$
    • $p_i$: individual page
    • $n$: total number of pages
  • $\mathcal{L}$: Set of layouts, $\mathcal{L} = \{l_1, l_2, \dots, l_m\}$
    • $l_i$: individual layout
    • $m$: total number of layouts
  • $Q$: Query
  • $\mathrm{Sim}(Q, p)$: Similarity score between query $Q$ and page $p$
  • $\mathrm{Sim}(Q, l)$: Similarity score between query $Q$ and layout $l$
  • $\mathrm{E_d^{dpr}}$: DPR embedding of the document
  • $\mathrm{E_q^{dpr}}$: DPR embedding of the query
  • $\mathbf{M_{phi3v}}$: Phi3-Vision model
  • $\mathbf{M_{vit}}$: ViT model
  • $d$: document
  • $q$: query
  • $D_1$: Dimension of the last hidden state of $\mathbf{M_{phi3v}}$
  • $\mathrm{E_d^{col}}$: ColBERT embedding of the document
  • $\mathrm{E_q^{col}}$: ColBERT embedding of the query
  • $\mathbf{M_{proj}}$: Projection layer
  • $D_2$: Reduced dimension after projection
  • $N_d$: Number of document tokens
  • $N_q$: Number of query tokens
  • $\mathrm{Sim}(q, d)_{dpr}$: Similarity between query $q$ and document $d$ using DPR
  • $\langle \cdot \mid \cdot \rangle$: Dot product
  • $\left\| \cdot \right\|$: Norm
  • $\mathrm{Sim}(q, d)_{col}$: Similarity between query $q$ and document $d$ using ColBERT
  • $\mathrm{E_q^{col}}^{(i)}$: $i$-th query token embedding
  • $\mathrm{E_d^{col}}^{(j)}$: $j$-th document token embedding
  • $d^+$: Positive document
  • $d^-$: Negative document
  • $\mathcal{L}^{dpr}_{(q, d^+, d^-)}$: Loss for DPR-Phi3
  • $\tau$: Temperature parameter
  • $\mathcal{L}^{col}_{(q, d^+, d^-)}$: Loss for Col-Phi3
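Using the symbols above, the similarity scores and training losses can plausibly be written as follows. This is a reconstruction based on the standard DPR cosine-similarity, ColBERT late-interaction, and InfoNCE contrastive-loss formulations, so the paper's exact equations may differ in minor details.

```latex
% DPR-style (single-vector) similarity: cosine of the query and document embeddings
\mathrm{Sim}(q, d)_{dpr} =
  \frac{\langle \mathrm{E_q^{dpr}} \mid \mathrm{E_d^{dpr}} \rangle}
       {\left\| \mathrm{E_q^{dpr}} \right\| \left\| \mathrm{E_d^{dpr}} \right\|}

% ColBERT-style (multi-vector) similarity: for each of the N_q query token embeddings,
% take the maximum dot product over the N_d document token embeddings, then sum
\mathrm{Sim}(q, d)_{col} =
  \sum_{i=1}^{N_q} \max_{1 \le j \le N_d}
  \left\langle \mathrm{E_q^{col}}^{(i)} \mid \mathrm{E_d^{col}}^{(j)} \right\rangle

% Contrastive (InfoNCE-style) training losses over a positive document d^+ and
% negative documents d^-, with temperature \tau
\mathcal{L}^{dpr}_{(q, d^+, d^-)} =
  -\log \frac{\exp\left( \mathrm{Sim}(q, d^+)_{dpr} / \tau \right)}
             {\exp\left( \mathrm{Sim}(q, d^+)_{dpr} / \tau \right)
              + \sum_{d^-} \exp\left( \mathrm{Sim}(q, d^-)_{dpr} / \tau \right)}

\mathcal{L}^{col}_{(q, d^+, d^-)} =
  -\log \frac{\exp\left( \mathrm{Sim}(q, d^+)_{col} / \tau \right)}
             {\exp\left( \mathrm{Sim}(q, d^+)_{col} / \tau \right)
              + \sum_{d^-} \exp\left( \mathrm{Sim}(q, d^-)_{col} / \tau \right)}
```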