Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models (2407.00569v4)

Published 30 Jun 2024 in cs.CV, cs.AI, and cs.CL

Abstract: Though advanced in understanding visual information with human languages, Large Vision-Language Models (LVLMs) still suffer from multimodal hallucinations. A natural concern is that during multimodal interaction, the generated hallucinations could influence the LVLMs' subsequent generation. Thus, we raise a question: When presented with a query relevant to the previously generated hallucination, will LVLMs be misled and respond incorrectly, even though the ground visual information exists? To answer this, we propose a framework called MMHalSnowball to evaluate LVLMs' behaviors when encountering generated hallucinations, where LVLMs are required to answer specific visual questions within a curated hallucinatory conversation. Crucially, our experiment shows that the performance of open-source LVLMs drops by at least $31\%$, indicating that LVLMs are prone to accept the generated hallucinations and make false claims that they would not have supported without distractions. We term this phenomenon Multimodal Hallucination Snowballing. To mitigate this, we further propose a training-free method called Residual Visual Decoding, where we revise the output distribution of LVLMs with the one derived from the residual visual input, providing models with direct access to the visual information. Experiments show that our method can mitigate more than $24\%$ of the snowballed multimodal hallucination while maintaining capabilities.
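The abstract's core idea, revising the model's next-token distribution with one conditioned only on the residual visual input, can be sketched generically. The linear interpolation below and the mixing weight `alpha` are illustrative assumptions, not the paper's exact combination rule:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def residual_visual_decoding(logits_full, logits_residual, alpha=0.5):
    """Blend the distribution from the full (possibly hallucinatory)
    conversation with the distribution from the residual visual input
    (image + current question only, no prior dialogue history).

    alpha is a hypothetical mixing weight; larger values lean more on
    the visually grounded residual distribution.
    """
    p_full = softmax(logits_full)
    p_residual = softmax(logits_residual)
    p = (1 - alpha) * p_full + alpha * p_residual
    return p / p.sum()  # renormalize to guard against rounding drift
```

The intent is that tokens supported by the image alone gain probability mass relative to tokens that are only plausible given the snowballed hallucinatory context.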

Authors (9)
  1. Weihong Zhong (15 papers)
  2. Xiaocheng Feng (54 papers)
  3. Liang Zhao (353 papers)
  4. Qiming Li (4 papers)
  5. Lei Huang (175 papers)
  6. Yuxuan Gu (17 papers)
  7. Weitao Ma (12 papers)
  8. Yuan Xu (122 papers)
  9. Bing Qin (186 papers)
Citations (3)