Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization (2311.16839v2)

Published 28 Nov 2023 in cs.CV and cs.CL

Abstract: Multimodal LLMs have made significant advancements in recent years, yet they still suffer from a common issue known as the "hallucination problem", in which the models generate textual descriptions that inaccurately depict or entirely fabricate content from associated images. This paper introduces a novel solution, Hallucination-Aware Direct Preference Optimization (HA-DPO), which reframes the hallucination problem as a preference selection task. The model is trained to favor the non-hallucinating response when presented with two responses of the same image (one accurate and one hallucinatory). Furthermore, this paper proposes an efficient pipeline for constructing positive (non-hallucinatory) and negative (hallucinatory) sample pairs, ensuring a high-quality, style-consistent dataset for robust preference learning. When applied to three mainstream multimodal models, HA-DPO significantly reduced hallucination issues and amplified the models' generalization capabilities. Notably, the MiniGPT-4 model, when enhanced with HA-DPO, demonstrated a substantial improvement: POPE accuracy rose from 51.13% to 86.13% (an absolute improvement of 35%), and the MME score surged from 932.00 to 1326.46 (a relative improvement of 42.32%). The codes, models, and datasets are made accessible at https://opendatalab.github.io/HA-DPO.

Enhancement of Multimodal LLMs through Hallucination-Aware Direct Preference Optimization

The paper under review addresses the persistent hallucination problem in large vision-language models (LVLMs), particularly in the generation of image-grounded textual descriptions. Despite rapid advances in LVLMs, hallucination, in which models fabricate content or describe images inaccurately, remains a significant challenge and can produce seriously misleading output in applications such as medical diagnostics.

The authors propose a novel framework called Hallucination-Aware Direct Preference Optimization (HA-DPO) that reframes the mitigation of hallucinations as a preference optimization problem. HA-DPO biases the model towards generating non-hallucinated responses by presenting the model with pairs of responses for the same image: one hallucinatory and one accurate. This process is facilitated by a curated pipeline for constructing these preference pairs, ensuring dataset consistency in style and quality.
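
HA-DPO builds on the Direct Preference Optimization objective, applied to (non-hallucinatory, hallucinatory) response pairs for the same image. As a minimal sketch, assuming summed per-response log-probabilities are already available and using an illustrative beta value and argument names, the standard DPO loss looks roughly like this in PyTorch:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_pos_logps: torch.Tensor,
             policy_neg_logps: torch.Tensor,
             ref_pos_logps: torch.Tensor,
             ref_neg_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over preference pairs.

    Each tensor holds the summed log-probability of a response
    (positive = non-hallucinatory, negative = hallucinatory) under
    the trainable policy or the frozen reference model.
    """
    pos_reward = beta * (policy_pos_logps - ref_pos_logps)
    neg_reward = beta * (policy_neg_logps - ref_neg_logps)
    # Push the policy to prefer the factual response over the hallucinatory
    # one, measured relative to the frozen reference model.
    return -F.logsigmoid(pos_reward - neg_reward).mean()
```

Because the optimization only widens the preference margin relative to a frozen reference model, no separate reward model or full retraining is required, which is part of what keeps the approach lightweight.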

In empirical evaluations, HA-DPO substantially reduced hallucinations across several state-of-the-art LVLMs. Applied to MiniGPT-4, it raised POPE accuracy from 51.13% to 86.13%, an absolute gain of 35 percentage points, and lifted the MME score from 932.00 to 1326.46, a relative improvement of 42.32%. These results underscore the efficacy of HA-DPO not only in curbing hallucinations but also in enhancing the models' generalization capabilities.
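
Note that the two gains are reported on different scales: the POPE figure is an absolute difference in accuracy, while the MME figure is a relative change of the score, i.e.

\[
86.13 - 51.13 = 35.00\ \text{points (absolute)}, \qquad
\frac{1326.46 - 932.00}{932.00} \approx 42.32\%\ \text{(relative)}.
\]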

The paper makes significant contributions to both the practical and theoretical sides of multimodal AI research. Practically, HA-DPO offers a lightweight, scalable remedy for hallucination that avoids extensive and costly data annotation or full retraining. Theoretically, its preference-learning strategy deepens the understanding of model biases, supporting further advances in model alignment and reliability.

Furthermore, the introduction of the Sentence-level Hallucination Ratio (SHR) offers a comprehensive, quantitative framework for evaluating hallucinations in LVLMs. It extends beyond existing benchmarks, which typically focus on predefined object categories and therefore miss a broader range of hallucination types. By checking sentence-level claims against factual image content, SHR provides an intuitive metric for further research and model development.
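
The paper gives the precise definition of SHR; based on the description above, it takes the form of a sentence-level ratio, rendered here only as an illustrative sketch:

\[
\mathrm{SHR} = \frac{\#\{\text{sentences judged hallucinated w.r.t. the image}\}}{\#\{\text{sentences in the generated description}\}}.
\]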

Looking ahead, HA-DPO holds promise for broader application across other modalities and could be adapted to improve more general LLMs. As the field evolves, strategies like HA-DPO that address model biases and strengthen factual grounding will be critical for reliable deployment in real-world scenarios where accuracy and context are paramount.

Authors (6)
  1. Zhiyuan Zhao (54 papers)
  2. Bin Wang (750 papers)
  3. Linke Ouyang (12 papers)
  4. Xiaoyi Dong (73 papers)
  5. Jiaqi Wang (218 papers)
  6. Conghui He (114 papers)
Citations (77)