
ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models (2401.13311v3)

Published 24 Jan 2024 in cs.CV, cs.AI, and cs.LG

Abstract: Many real-world tasks require an agent to reason jointly over text and visual objects (e.g., navigating in public spaces), which we refer to as context-sensitive text-rich visual reasoning. Specifically, these tasks require an understanding of the context in which the text interacts with visual elements within an image. However, there is a lack of existing datasets to benchmark the state-of-the-art multimodal models' capability on context-sensitive text-rich visual reasoning. In this paper, we introduce ConTextual, a novel dataset featuring human-crafted instructions that require context-sensitive reasoning for text-rich images. We conduct experiments to assess the performance of 14 foundation models (GPT-4V, Gemini-Pro-Vision, LLaVA-Next) and establish a human performance baseline. Further, we perform human evaluations of the model responses and observe a significant performance gap of 30.8% between GPT-4V (the current best-performing Large Multimodal Model) and human performance. Our fine-grained analysis reveals that GPT-4V encounters difficulties interpreting time-related data and infographics. However, it demonstrates proficiency in comprehending abstract visual contexts such as memes and quotes. Finally, our qualitative analysis uncovers various factors contributing to poor performance including lack of precise visual perception and hallucinations. Our dataset, code, and leaderboard can be found on the project page https://con-textual.github.io/

Evaluation of Context-Sensitive Text-Rich Visual Reasoning

Introduction

The advent of instruction-tuned large multimodal models (LMMs) has led to heightened capabilities in responding to human instructions over images. Recent datasets have focused on assessing the Optical Character Recognition (OCR) ability of models, but this falls short of testing the full potential of LMMs to jointly reason over the text and visual context in an image. To bridge this gap, the paper introduces the ConTextual benchmark, designed to evaluate LMMs' ability to perform context-sensitive reasoning over diverse and challenging real-world scenarios.

ConTextual Dataset

ConTextual consists of 506 challenging instructions that test LMMs across eight visual scenarios representing everyday natural or digital scenes. The dataset demands joint reasoning over textual and visual cues, something prior datasets do not sufficiently incentivize. Its instructions include open-ended questions and imperative tasks that call for capabilities beyond information extraction, including mathematical reasoning.
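For concreteness, here is a minimal sketch of how a ConTextual-style example (a text-rich image paired with a human-crafted, context-sensitive instruction and a reference response) might be represented and grouped by scenario. The field names and sample records are illustrative assumptions, not the dataset's actual schema; scenario labels such as "abstract" and "infographic" echo categories mentioned in the abstract.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical record layout; the real ConTextual schema may differ.
@dataclass
class ContextualExample:
    image_path: str          # text-rich image (e.g., a meme, chart, or sign)
    instruction: str         # human-crafted, context-sensitive instruction
    reference_response: str  # reference answer used during evaluation
    visual_scenario: str     # one of the eight scenario categories

def group_by_scenario(examples: List[ContextualExample]) -> Dict[str, List[ContextualExample]]:
    """Bucket examples by visual scenario for per-category analysis."""
    buckets: Dict[str, List[ContextualExample]] = {}
    for ex in examples:
        buckets.setdefault(ex.visual_scenario, []).append(ex)
    return buckets

# Toy usage with made-up examples (placeholders, not dataset content):
examples = [
    ContextualExample("meme_001.png", "Explain why the caption fits the image.", "...", "abstract"),
    ContextualExample("chart_014.png", "Which year shows the largest increase?", "...", "infographic"),
]
print({scenario: len(items) for scenario, items in group_by_scenario(examples).items()})
```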

Experimental Setup and Findings

A comprehensive set of experiments was conducted with 13 foundation models, including both proprietary LMMs (e.g., GPT-4V, Gemini-Pro-Vision) and open ones (e.g., LLaVA-1.5). The findings show GPT-4V(ision) outperforming the other LMMs, though it still lags human performance by 30.8%. Open LMMs trail the proprietary models by a notable margin, pointing to the need for future advances that narrow this divide.
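To make the reported 30.8% gap concrete, the sketch below shows how per-example human acceptance judgments of model responses could be aggregated into acceptance rates and a human-model gap. The judgment vectors are made up for illustration and are not data from the paper.

```python
from typing import List

def acceptance_rate(judgments: List[int]) -> float:
    """Percentage of responses judged acceptable (1 = accepted, 0 = rejected)."""
    return 100.0 * sum(judgments) / len(judgments)

def human_model_gap(model_judgments: List[int], human_judgments: List[int]) -> float:
    """Gap, in percentage points, between human and model acceptance rates."""
    return acceptance_rate(human_judgments) - acceptance_rate(model_judgments)

# Toy illustration with fabricated judgments:
model = [1, 0, 0, 1, 0, 1, 0, 0]
human = [1, 1, 1, 1, 0, 1, 1, 0]
print(f"model: {acceptance_rate(model):.1f}%, "
      f"human: {acceptance_rate(human):.1f}%, "
      f"gap: {human_model_gap(model, human):.1f} points")
```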

Model Performance and Analysis

Qualitative analysis reveals a wide range of performance levels: GPT-4V and Gemini-Pro-Vision demonstrate superior context-sensitive text-rich visual reasoning, whereas open-source LMMs underperform considerably. The analysis also surfaces failure modes such as hallucination and a failure to ground the instruction in the image. Interestingly, in certain abstract categories such as memes and quotes, GPT-4V exceeds human performance, indicating the potential for tuning LMMs toward better visual context understanding. Overall, ConTextual demonstrates how challenging context-sensitive text-rich visual reasoning remains for modern LMMs and how large the gap to human performance still is.

Authors (4)
  1. Rohan Wadhawan (5 papers)
  2. Hritik Bansal (38 papers)
  3. Kai-Wei Chang (292 papers)
  4. Nanyun Peng (205 papers)