
Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction (2504.17671v3)

Published 24 Apr 2025 in cs.CL, cs.AI, and cs.LG

Abstract: This study addresses the critical challenge of hallucination mitigation in Large Vision-Language Models (LVLMs) for Visual Question Answering (VQA) tasks through a Split Conformal Prediction (SCP) framework. While LVLMs excel in multi-modal reasoning, their outputs often exhibit hallucinated content with high confidence, posing risks in safety-critical applications. We propose a model-agnostic uncertainty quantification method that integrates dynamic threshold calibration and cross-modal consistency verification. By partitioning data into calibration and test sets, the framework computes nonconformity scores to construct prediction sets with statistical guarantees under user-defined risk levels ($\alpha$). Key innovations include: (1) rigorous control of marginal coverage to ensure empirical error rates remain strictly below $\alpha$; (2) dynamic adjustment of prediction set sizes inversely with $\alpha$, filtering low-confidence outputs; (3) elimination of prior distribution assumptions and retraining requirements. Evaluations on benchmarks (ScienceQA, MMMU) with eight LVLMs demonstrate that SCP enforces theoretical guarantees across all $\alpha$ values. The framework achieves stable performance across varying calibration-to-test split ratios, underscoring its robustness for real-world deployment in healthcare, autonomous systems, and other safety-sensitive domains. This work bridges the gap between theoretical reliability and practical applicability in multi-modal AI systems, offering a scalable solution for hallucination detection and uncertainty-aware decision-making.

Authors (2)
  1. Yuanchang Ye
  2. Weiyan Wen

Summary

Data-Driven Calibration of Prediction Sets in Large Vision-Language Models Based on Inductive Conformal Prediction

This paper presents a rigorous framework for addressing the challenge of hallucination in Large Vision-Language Models (LVLMs), particularly within Visual Question Answering (VQA) tasks. The method relies on a Split Conformal Prediction (SCP) approach, which is model-agnostic and effectively controls the marginal coverage of prediction sets. The framework is noteworthy for combining dynamic threshold calibration with cross-modal consistency verification, thus providing a robust mechanism to mitigate hallucinations that often occur with high confidence in LVLMs.

The paper begins with a critical examination of the vulnerability of LVLMs to hallucination in multi-modal reasoning, emphasizing that hallucinations occur more frequently than in uni-modal LLMs. The authors highlight the risks posed by such hallucinations, particularly in safety-critical sectors like healthcare and autonomous systems. These hallucinations can skew decision-making processes or introduce severe safety risks, necessitating efficient detection and quantification methods.

A key contribution is the integration of a model-agnostic uncertainty quantification technique that does not rest on prior distributional assumptions and does not necessitate retraining of LVLMs. The SCP framework partitions data into calibration and test sets, enabling the computation of nonconformity scores that define prediction sets with statistically guaranteed coverage at user-defined risk levels (α). Noteworthy innovations include rigorous control over empirical error rates, dynamic adjustment of prediction set sizes inversely related to α, and elimination of retraining requirements. Such measures enhance the robustness of LVLM predictions without sacrificing computational efficiency.
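The paper does not reproduce its implementation here, but the split conformal procedure it describes can be sketched in a few lines. The snippet below is a minimal illustration, assuming the LVLM exposes per-option softmax probabilities for multiple-choice VQA items and using one minus the probability of the true answer as the nonconformity score; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Minimal split conformal prediction over per-option probabilities.

    cal_probs:  (n_cal, n_options) model probabilities on the calibration split
    cal_labels: (n_cal,) index of the correct option for each calibration item
    test_probs: (n_test, n_options) model probabilities on the test split
    alpha:      user-defined risk level (target error rate)
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true answer
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha)) / n
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(cal_scores, q_level, method="higher")
    # Prediction set: every option whose nonconformity score is <= qhat
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```

The finite-sample correction in the quantile is what delivers the marginal coverage guarantee P(y ∈ C(x)) ≥ 1 − α under exchangeability of calibration and test data, which is the statistical property the paper leverages.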

Empirical evaluations of this framework are conducted on the ScienceQA and MMMU benchmarks, deploying eight LVLMs from distinct model groups including LLaVA1.5, LLaVA-NeXT, Qwen2VL, and InternVL2. The findings demonstrate an effective enforcement of theoretical guarantees across all α values, corroborating the framework's capacity to maintain stable performance across different calibration-to-test split ratios. This robustness underscores its real-world applicability.

Results indicate that, even with a high error probability tolerance (e.g., α ≥ 0.6), models such as Qwen2-VL-7B-Instruct maintain empirical error rates below the set threshold, which is crucial for applications demanding high reliability and precision. As α increases, the prediction sets become more compact, thus effectively filtering out low-confidence outputs and minimizing hallucination risks. This inverse correlation between α and prediction set size is especially beneficial for mitigating hallucinations in LVLMs, ensuring reliability in safety-sensitive applications.
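To make these trends concrete, the short check below sweeps several α values and records the empirical error rate and average prediction-set size, reusing the hypothetical split_conformal_sets sketch above on synthetic probabilities. It is a toy illustration of the evaluation protocol, not the paper's benchmark code.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_coverage(pred_sets, labels):
    """Empirical error rate and average prediction-set size."""
    miss = np.mean([y not in s for s, y in zip(pred_sets, labels)])
    avg_size = np.mean([len(s) for s in pred_sets])
    return miss, avg_size

# Synthetic 4-option data standing in for LVLM answer probabilities.
n_cal, n_test, n_opt = 500, 500, 4
cal_probs = rng.dirichlet(np.ones(n_opt), n_cal)
cal_labels = rng.integers(0, n_opt, n_cal)
test_probs = rng.dirichlet(np.ones(n_opt), n_test)
test_labels = rng.integers(0, n_opt, n_test)

# The empirical error should stay near or below each alpha (up to finite-sample
# fluctuation), and the average set size should shrink as alpha grows.
for alpha in (0.1, 0.2, 0.4, 0.6):
    sets = split_conformal_sets(cal_probs, cal_labels, test_probs, alpha)
    err, size = evaluate_coverage(sets, test_labels)
    print(f"alpha={alpha:.1f}  empirical error={err:.3f}  avg set size={size:.2f}")
```

The same sweep, run with real calibration and test splits from a benchmark such as ScienceQA or MMMU, corresponds to the kind of α-versus-error and α-versus-set-size curves the authors report.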

In conclusion, this paper bridges theoretical reliability with practical applicability in multi-modal AI systems. It provides a scalable and efficient approach to hallucination detection and uncertainty-aware decision-making in LVLMs. Future work could focus on extending this framework to other multi-modal tasks and exploring its adaptability to varying data distributions in real-time environments. The capability to ensure statistically valid coverage dynamically is indispensable for the deployment of AI in sectors where safety is paramount.
