CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection (2506.05243v1)

Published 5 Jun 2025 in cs.CL

Abstract: A common approach to hallucination detection casts it as a natural language inference (NLI) task, often using LLMs to classify whether the generated text is entailed by corresponding reference texts. Since entailment classification is a complex reasoning task, one would expect that LLMs could benefit from generating an explicit reasoning process, as in CoT reasoning or the explicit "thinking" of recent reasoning models. In this work, we propose that guiding such models to perform a systematic and comprehensive reasoning process -- one that both decomposes the text into smaller facts and also finds evidence in the source for each fact -- allows models to execute much finer-grained and more accurate entailment decisions, leading to increased performance. To that end, we define a 3-step reasoning process, consisting of (i) claim decomposition, (ii) sub-claim attribution and entailment classification, and (iii) aggregated classification, showing that such guided reasoning indeed yields improved hallucination detection. Following this reasoning framework, we introduce an analysis scheme, consisting of several metrics that measure the quality of the intermediate reasoning steps, which provides additional empirical evidence for the improved quality of our guided reasoning scheme.

Summary

  • The paper presents CLATTER, a three-step framework that decomposes claims and classifies entailment to enhance hallucination detection in LLMs.
  • It introduces new metrics such as atomicity and aggregation coherence to evaluate the quality of intermediate reasoning steps.
  • Empirical analysis demonstrates that CLATTER significantly improves entailment classification accuracy and the detection of unsupported claims in generated text.

Overview of "CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection"

The paper introduces CLATTER (Claim Localization and ATTribution for Entailment Reasoning), a structured methodology for enhancing hallucination detection in LLMs. The work addresses the challenge of detecting unsupported claims, known as hallucinations, in generated text: it frames hallucination detection as a natural language inference (NLI) task and proposes a systematic reasoning process that improves the accuracy of LLMs' entailment classifications.

Key Contributions

  1. Reasoning Framework: The authors delineate a three-step method to decompose and evaluate claims (a minimal illustrative sketch follows this list):
    • Decomposition: The hypothesis is divided into smaller sub-claims, or atomic facts, each expressing a single statement that can be verified against the source text.
    • Attribution and Entailment Classification: Each sub-claim is checked against the source for supporting, refuting, or neutral evidence and then assigned an entailment label.
    • Aggregation: The entailment labels of the sub-claims are aggregated into an assessment of the overall claim.
  2. Metrics for Evaluation: The paper offers new metrics to evaluate the quality of intermediate steps within this reasoning process. These include atomicity (granularity of decomposition), soundness (integrity of sub-claims), completeness, attribution (correctness of evidence identification), entailment (classification accuracy), and aggregation coherence.
  3. Empirical Analysis: By applying their framework, the authors observe improved detection accuracy of unsupported claims across various datasets, showing that CLATTER-guided reasoning can enhance both entailment classification accuracy and the validity of reasoning processes.
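
To make the three steps concrete, the following is a minimal Python sketch of a CLATTER-style pipeline. It is not the authors' implementation: `call_llm` is a placeholder for whichever LLM client is used, and the prompts and the strictest-label aggregation rule are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of a CLATTER-style three-step entailment check (illustrative;
# not the authors' code). `call_llm` is a placeholder for an LLM client, and
# the prompts and aggregation rule are assumptions made for this example.
from typing import Callable, List

Label = str  # one of: "entailed", "contradicted", "neutral"


def decompose(claim: str, call_llm: Callable[[str], str]) -> List[str]:
    """Step (i): split the claim into atomic, self-contained sub-claims."""
    prompt = ("Decompose the following claim into minimal, self-contained "
              f"sub-claims, one per line:\n{claim}")
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]


def classify(sub_claim: str, source: str, call_llm: Callable[[str], str]) -> Label:
    """Step (ii): attribute the sub-claim to evidence in the source, then label it."""
    prompt = (f"Source:\n{source}\n\nSub-claim: {sub_claim}\n"
              "Quote the most relevant evidence from the source, then answer on "
              "the last line with exactly one of: entailed / contradicted / neutral.")
    lines = [line for line in call_llm(prompt).splitlines() if line.strip()]
    return lines[-1].strip().lower() if lines else "neutral"


def aggregate(labels: List[Label]) -> Label:
    """Step (iii): strictest-label aggregation (an assumed policy): any
    contradicted sub-claim refutes the claim, any neutral sub-claim leaves it
    unsupported, and otherwise the claim is entailed."""
    if "contradicted" in labels:
        return "contradicted"
    if "neutral" in labels:
        return "neutral"
    return "entailed"


def detect_hallucination(claim: str, source: str,
                         call_llm: Callable[[str], str]) -> Label:
    sub_claims = decompose(claim, call_llm)
    labels = [classify(sc, source, call_llm) for sc in sub_claims]
    return aggregate(labels)
```

Under this sketch, a metric such as aggregation coherence can be read as checking whether a model's freely generated final verdict agrees with `aggregate()` applied to the labels it assigned to its own sub-claims.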

Results

The experimental results highlight CLATTER's ability to improve hallucination detection in LLMs. Through structured decomposition and detailed attribution, models guided by CLATTER outperformed baselines that did not receive comprehensive reasoning instructions. The improvement is particularly pronounced for models trained for reasoning (LRMs), suggesting that these models benefit more from structured guidance. This indicates that an articulated reasoning process enhances performance on complex tasks requiring fine-grained analysis, such as factual consistency assessment and hallucination detection.

Implications and Future Developments

The practical implications of this research are significant for applications that require reliable text generation, such as automated summarization and conversational agents. By making hallucinations easier to detect, and thus to reduce, CLATTER can bolster trust in AI-generated content, an essential factor for broader societal acceptance of AI technologies.

In terms of theoretical implications, the work questions the adequacy of unguided reasoning in current LLMs and suggests that training models to follow structured reasoning may be more effective. It also offers insights into how detailed intermediate reasoning could benefit other NLP tasks that depend on entailment prediction and verification.

Future work could explore adapting CLATTER to different domains or types of LLMs, applying the framework beyond NLI tasks, such as to editing and revising generated text, or even extending its principles to other forms of machine reasoning beyond LLMs.

Overall, the CLATTER framework represents a notable advancement in improving the robustness and reliability of LLMs' judgment on factual entailment, emphasizing that structured reasoning is pivotal for advancing model performance in complex reasoning tasks.