
Enhancing Hallucination Detection through Noise Injection (2502.03799v2)

Published 6 Feb 2025 in cs.CL, cs.SY, and eess.SY

Abstract: LLMs are prone to generating plausible yet incorrect responses, known as hallucinations. Effectively detecting hallucinations is therefore crucial for the safe deployment of LLMs. Recent research has linked hallucinations to model uncertainty, suggesting that hallucinations can be detected by measuring dispersion over answer distributions obtained from a set of samples drawn from a model. While drawing from the distribution over tokens defined by the model is a natural way to obtain samples, in this work, we argue that it is sub-optimal for the purpose of detecting hallucinations. We show that detection can be improved significantly by taking into account model uncertainty in the Bayesian sense. To this end, we propose a very simple and efficient approach that perturbs an appropriate subset of model parameters, or equivalently hidden unit activations, during sampling. We demonstrate its effectiveness across a wide range of datasets and model architectures.

Enhancing Hallucination Detection in LLMs through Noise Injection

The phenomenon of hallucinations in LLMs poses significant challenges for their safe deployment. Hallucinations refer to situations where LLMs generate responses that are coherent yet factually incorrect. With the increasing reliance on these models in various applications, effective detection of hallucinations is paramount. The paper "Enhancing Hallucination Detection through Noise Injection" seeks to improve hallucination detection by introducing an innovative approach that involves noise injection at specific layers of the model, thereby complementing traditional sampling techniques for measuring model uncertainty.

Overview of Hallucination Detection in LLMs

Hallucination detection in LLMs is often approached through the lens of model uncertainty: higher uncertainty in generated responses is taken to indicate a higher likelihood of hallucination. Traditional approaches sample several responses to the same prompt and compute metrics such as predictive entropy or lexical similarity to measure divergence or inconsistency across the samples. This relies on variation introduced solely at the prediction layer, by sampling from the model's probability distribution over possible next tokens.
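
As a concrete illustration of this baseline (not code from the paper), the sketch below estimates uncertainty by drawing several answers to the same question and computing the entropy of the empirical answer distribution. Grouping answers by exact string match is a simplification of the metrics the paper actually uses, such as predictive entropy and lexical similarity.

```python
import math
from collections import Counter

def answer_entropy(sampled_answers):
    """Empirical entropy over a set of sampled answers.

    Higher entropy (more disagreement across samples) is read as a
    signal that the model may be hallucinating. Answers are grouped
    by exact string match after normalization, which is a crude
    stand-in for the paper's predictive-entropy / lexical-similarity
    metrics.
    """
    counts = Counter(a.strip().lower() for a in sampled_answers)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)

# High agreement -> low entropy; disagreement -> high entropy.
print(answer_entropy(["Paris", "Paris", "Paris", "paris"]))     # 0.0
print(answer_entropy(["Paris", "Lyon", "Marseille", "Paris"]))  # ~1.04
```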

Noise Injection as a Complementary Source of Randomness

The authors argue that relying solely on prediction-layer sampling is overly restrictive. Instead, they inject noise into the hidden representations of intermediate layers within the model, introducing additional randomness earlier in the computation, which is hypothesized to better capture the variations that lead to hallucinations. The paper demonstrates that perturbing these intermediate representations complements prediction-layer sampling and improves the accuracy of hallucination detection.
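
To make the mechanism concrete, here is a minimal PyTorch sketch of the general idea rather than the paper's exact recipe: Gaussian noise is added to the hidden states of a few intermediate transformer blocks via forward hooks while samples are drawn. The choice of layers (`layer_ids`), the noise scale (`noise_std`), and the HuggingFace-style `model.model.layers` access path are illustrative assumptions.

```python
import torch

def add_noise_hook(noise_std):
    """Forward hook that perturbs a block's output hidden states with
    zero-mean Gaussian noise. Which layers to hook and what std to use
    are assumptions for illustration, not the paper's exact settings."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        noisy = hidden + noise_std * torch.randn_like(hidden)
        if isinstance(output, tuple):
            return (noisy,) + output[1:]
        return noisy
    return hook

def sample_with_noise(model, tokenizer, prompt, n_samples=5,
                      layer_ids=(10, 11, 12), noise_std=0.01):
    """Draw several samples while noise is injected into the chosen
    intermediate transformer blocks (HuggingFace LLaMA/Mistral models
    expose them under model.model.layers)."""
    handles = [model.model.layers[i].register_forward_hook(add_noise_hook(noise_std))
               for i in layer_ids]
    try:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = [model.generate(**inputs, do_sample=True, max_new_tokens=64)
                   for _ in range(n_samples)]
        return [tokenizer.decode(o[0], skip_special_tokens=True) for o in outputs]
    finally:
        # Always remove the hooks so normal inference is unaffected.
        for h in handles:
            h.remove()
```

The returned samples can then be fed to a dispersion measure such as the entropy sketch above, combining prediction-layer randomness with the injected intermediate-layer noise.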

Experimental Validation and Results

The paper presents an extensive empirical analysis involving various datasets and LLM architectures, such as LLaMA and Mistral models. The analysis shows a consistent improvement in the detection of hallucination instances when the noise injection method is employed alongside traditional sampling approaches. The experiments employed multiple uncertainty metrics, including predictive entropy and lexical similarity, assessing performance on math reasoning tasks (e.g., GSM8K) and trivia question-answering datasets (e.g., TriviaQA).

Key numerical results show that combining noise injection with prediction-layer sampling yields higher AUROC values than prediction-layer sampling alone, indicating more effective hallucination detection. Furthermore, noise injection did not detract from the models' accuracy on standard reasoning tasks, and in certain cases accuracy even improved due to increased robustness against hallucination-induced variance.
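
For reference, detection quality of this kind is typically scored by treating each question's uncertainty value as a score for a binary hallucination label and computing AUROC. The sketch below uses scikit-learn with placeholder labels and scores purely for illustration; it is not the paper's evaluation code.

```python
from sklearn.metrics import roc_auc_score

# Placeholder data: 1 = response judged a hallucination, 0 = judged
# correct; scores are the per-question uncertainty values
# (e.g., predictive entropy) produced by the detector.
labels = [0, 0, 1, 1, 0, 1]
uncertainty_scores = [0.1, 0.3, 0.9, 0.7, 0.2, 0.8]

# AUROC measures how well higher uncertainty ranks hallucinated
# responses above correct ones; 0.5 is chance, 1.0 is perfect.
print(roc_auc_score(labels, uncertainty_scores))  # 1.0 for this toy data
```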

Implications and Future Directions

The paper's findings have implications for both the theoretical understanding and the practical application of LLMs. Theoretically, the work suggests a new dimension for uncertainty assessment, with intermediate-layer representations offering additional informative signals. Practically, it opens an avenue toward more reliable LLM systems, which is crucial for sensitive applications where factual correctness is imperative.

Looking ahead, this approach could be further explored across a broader range of model architectures and applications. Additionally, fine-tuning the noise injection mechanism, such as varying noise levels and injecting noise into alternative model components, presents a rich area for future investigation. This ongoing research will likely contribute to a refined understanding of model behavior under uncertainty and foster the development of more robust LLMs.

In summary, this paper presents a sophisticated method for enhancing hallucination detection in LLMs, offering valuable insights that could inform the design of future models and applications.

Authors (6)
  1. Litian Liu
  2. Reza Pourreza
  3. Sunny Panchal
  4. Apratim Bhattacharyya
  5. Yao Qin
  6. Roland Memisevic