Unveiling the Ignorance of MLLMs: Seeing Clearly, Answering Incorrectly (2406.10638v2)

Published 15 Jun 2024 in cs.CV

Abstract: Multimodal LLMs (MLLMs) have displayed remarkable performance in multi-modal tasks, particularly in visual comprehension. However, we reveal that MLLMs often generate incorrect answers even when they understand the visual content. To this end, we manually construct a benchmark with 12 categories and design evaluation metrics that assess the degree of error in MLLM responses even when the visual content is seemingly understood. Based on this benchmark, we test 15 leading MLLMs and analyze the distribution of attention maps and logits of some MLLMs. Our investigation identifies two primary issues: 1) most instruction tuning datasets predominantly feature questions that 'directly' relate to the visual content, leading to a bias in MLLMs' responses to other indirect questions, and 2) MLLMs' attention to visual tokens is notably lower than to system and question tokens. We further observe that attention scores between questions and visual tokens as well as the model's confidence in the answers are lower in response to misleading questions than to straightforward ones. To address the first challenge, we introduce a paired positive and negative data construction pipeline to diversify the dataset. For the second challenge, we propose to enhance the model's focus on visual content during decoding by refining the text and visual prompt. For the text prompt, we propose a content guided refinement strategy that performs preliminary visual content analysis to generate structured information before answering the question. Additionally, we employ a visual attention refinement strategy that highlights question-relevant visual tokens to increase the model's attention to visual content that aligns with the question. Extensive experiments demonstrate that these challenges can be significantly mitigated with our proposed dataset and techniques.
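To make the content-guided refinement idea from the abstract concrete, below is a minimal sketch of a two-stage prompting wrapper: the model is first asked for a structured analysis of the visual content, and the question is then answered conditioned on that analysis. The `mllm_generate` callable, the prompt wording, and the function name are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable

def content_guided_answer(
    mllm_generate: Callable[[str, str], str],  # hypothetical (image_path, prompt) -> text
    image_path: str,
    question: str,
) -> str:
    """Two-stage answering sketch: analyze the image first, then answer."""
    # Stage 1: preliminary visual content analysis producing structured information,
    # as described in the abstract's content-guided refinement strategy.
    analysis_prompt = (
        "Describe the image as structured information: "
        "list the main objects, their attributes, and their relations."
    )
    structured_info = mllm_generate(image_path, analysis_prompt)

    # Stage 2: answer the (possibly misleading) question, explicitly grounded
    # in the structured description generated above.
    answer_prompt = (
        f"Structured description of the image:\n{structured_info}\n\n"
        f"Using only what is actually visible in the image, answer: {question}"
    )
    return mllm_generate(image_path, answer_prompt)
```

The companion visual attention refinement strategy operates inside the model rather than on the prompt, upweighting attention on question-relevant visual tokens during decoding, and is not captured by this prompt-level sketch.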

Authors (11)
  1. Yexin Liu (25 papers)
  2. Zhengyang Liang (10 papers)
  3. Yueze Wang (14 papers)
  4. Muyang He (6 papers)
  5. Jian Li (667 papers)
  6. Bo Zhao (242 papers)
  7. Xianfeng Wu (8 papers)
  8. Feilong Tang (40 papers)
  9. Zheng Liu (312 papers)
  10. Harry Yang (27 papers)
  11. Sernam Lim (8 papers)
Citations (5)