
Predictable Compression Failures: Why Language Models Actually Hallucinate (2509.11208v1)

Published 14 Sep 2025 in stat.ML and cs.LG

Abstract: LLMs perform near-Bayesian inference yet violate permutation invariance on exchangeable data. We resolve this by showing transformers minimize expected conditional description length (cross-entropy) over orderings, $\mathbb{E}_\pi[\ell(Y \mid \Gamma_\pi(X))]$, which admits a Kolmogorov-complexity interpretation up to additive constants, rather than the permutation-invariant description length $\ell(Y \mid X)$. This makes them Bayesian in expectation, not in realization. We derive (i) a Quantified Martingale Violation bound showing order-induced deviations scale as $O(\log n)$ with constants; (ii) the Expectation-level Decompression Law linking information budgets to reliability for Bernoulli predicates; and (iii) deployable planners (B2T/RoH/ISR) for answer/abstain decisions. Empirically, permutation dispersion follows $a+b\ln n$ (Qwen2-7B $b \approx 0.377$, Llama-3.1-8B $b \approx 0.147$); permutation mixtures improve ground-truth likelihood/accuracy; and randomized dose-response shows hallucinations drop by $\sim 0.13$ per additional nat. A pre-specified audit with a fixed ISR=1.0 achieves near-0% hallucinations via calibrated refusal at 24% abstention. The framework turns hallucinations into predictable compression failures and enables principled information budgeting.

Summary

  • The paper presents an information-theoretic framework that formalizes hallucinations as predictable compression failures arising from insufficient information budgets.
  • It introduces practical metrics like Bits-to-Trust, Risk-of-Hallucination, and Information Sufficiency Ratio to predict and mitigate hallucination risks.
  • Empirical results validate that permutation mixing and ISR gating notably reduce error rates and achieve near-zero hallucination rates.

Predictable Compression Failures: An Information-Theoretic Account of Hallucination in LLMs

Introduction

This paper provides a rigorous information-theoretic framework for understanding hallucinations in LLMs, reconciling the apparent paradox between their near-Bayesian inference capabilities and systematic violations of permutation invariance on exchangeable data. The central thesis is that transformers with positional encodings minimize the expected conditional description length over input orderings, $\mathbb{E}_\pi[\ell(Y \mid \Gamma_\pi(X))]$, rather than the permutation-invariant $\ell(Y \mid X)$. This distinction implies that transformers are "Bayesian in expectation, not in realization," achieving optimal compression and calibration only when averaged over permutations. The framework formalizes hallucinations as predictable compression failures, quantifies their risk via information budgets, and introduces deployable planners for answer/abstain decisions.

Theoretical Framework

Conditional Complexity Minimization and Permutation Sensitivity

Transformers equipped with positional encodings process input sequences in an order-sensitive manner, violating the permutation invariance required for Bayesian inference on exchangeable data. The paper proves that, due to positional processing, transformers minimize the expected cross-entropy over permutations, not the cross-entropy for a fixed ordering. This is formalized via the Quantified Martingale Violation (QMV) theorem, which provides an explicit $O(\log n)$ upper bound (with constants) on order-induced deviations under harmonic positional decay. The analysis leverages Kolmogorov complexity, showing that the expected description length minimized by transformers admits a complexity-theoretic interpretation up to additive constants.

Expectation-level Decompression Law (EDFL)

The EDFL establishes a quantitative relationship between the information budget $\bar{\Delta}$ and the reliability of predictions for Bernoulli predicates. For rare events with prior mass $\bar{q} \ll 1$, achieving reliability $p = 1-\epsilon$ requires $\bar{\Delta} \geq (1-\epsilon)\log(1/\bar{q})$ nats. This result transforms hallucination from an unpredictable failure mode into a quantifiable consequence of information insufficiency, providing exact bounds rather than binary success/failure criteria.
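As an illustrative calculation (not from the paper): for a predicate with prior mass $\bar{q} = 0.01$ and target reliability $p = 0.95$ (i.e., $\epsilon = 0.05$), the bound requires $\bar{\Delta} \geq 0.95 \ln(100) \approx 4.4$ nats of supporting information before answering at that reliability.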

Operational Planners: B2T, RoH, ISR

Three practical metrics are derived from EDFL:

  • Bits-to-Trust (B2T): The information required to reach a target reliability $h^*$.
  • Risk-of-Hallucination (RoH): The achievable error given an information budget.
  • Information Sufficiency Ratio (ISR): The ratio of available information to the required threshold, governing abstention decisions.

The ISR gating algorithm enables ex-ante prediction of hallucination risk and principled abstention, with a fixed threshold (ISR=1.0) analytically determined from the theory.
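To make the planners concrete, the following minimal Python sketch reads the functional forms directly off the EDFL bound above (required nats $\approx h^* \log(1/\bar{q})$); it is an illustration under that assumption, not the paper's implementation, which may add clipping and calibration constants.

    import math

    def bits_to_trust(q_bar: float, h_star: float) -> float:
        # Nats required by the stated EDFL bound to reach target reliability h_star
        # for a Bernoulli predicate with prior mass q_bar.
        return h_star * math.log(1.0 / q_bar)

    def risk_of_hallucination(q_bar: float, delta_bar: float) -> float:
        # Achievable error implied by inverting the same bound, clipped to [0, 1].
        return min(1.0, max(0.0, 1.0 - delta_bar / math.log(1.0 / q_bar)))

    def information_sufficiency_ratio(q_bar: float, delta_bar: float, h_star: float) -> float:
        # Ratio of the available information budget to the B2T requirement.
        return delta_bar / bits_to_trust(q_bar, h_star)

    def answer_or_abstain(q_bar: float, delta_bar: float, h_star: float = 0.95) -> str:
        # ISR gating with the fixed threshold ISR = 1.0.
        return "answer" if information_sufficiency_ratio(q_bar, delta_bar, h_star) >= 1.0 else "abstain"

    # Illustrative values: 1% prior mass, 3 nats of evidence, 95% target reliability.
    print(bits_to_trust(0.01, 0.95))         # ~4.37 nats required
    print(risk_of_hallucination(0.01, 3.0))  # ~0.35 residual risk
    print(answer_or_abstain(0.01, 3.0))      # "abstain" (ISR ~ 0.69 < 1.0)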

Empirical Validation

Permutation Mixtures and Dispersion Scaling

Experiments on a custom Factuality Slice dataset (3,059 QA items) demonstrate that permutation-induced dispersion in model predictions follows $a + b\ln n$ scaling, confirming the QMV theorem. Qwen2-7B exhibits a dispersion slope $b \approx 0.377$, while Llama-3.1-8B shows $b \approx 0.147$, reflecting architectural differences. Uniform permutation mixtures strictly improve ground-truth likelihood and accuracy, with Jensen gaps of 0.1041 nats/token (Qwen2-7B) and 0.00982 nats/token (Llama-3.1-8B). The mixture-optimality gap is negligible ($< 10^{-4}$ nats/token), indicating near-MDL optimality within the permutation family.
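As a sketch of how the per-item Jensen gap above can be measured, the snippet below compares the log-likelihood of the uniform permutation mixture with the mean per-permutation log-likelihood, assuming access to the model's probability of the ground-truth answer under each sampled ordering; the probabilities are toy values, not the paper's data.

    import math

    def jensen_gap(per_perm_probs):
        # Jensen gap in nats for one item: log of the mixture probability minus the
        # mean per-permutation log-probability; non-negative by Jensen's inequality.
        mixture = sum(per_perm_probs) / len(per_perm_probs)
        mean_log = sum(math.log(p) for p in per_perm_probs) / len(per_perm_probs)
        return math.log(mixture) - mean_log

    # Toy probabilities of the ground-truth answer under 8 random context orderings.
    probs = [0.62, 0.55, 0.48, 0.71, 0.33, 0.58, 0.44, 0.66]
    print(jensen_gap(probs))  # > 0: the uniform mixture assigns higher likelihood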

Causal Dose-Response of Hallucination to Information Budget

Randomized experiments controlling information content (the dose of support chunks) establish a causal relationship between information budget and hallucination rate. Each additional nat of information reduces the hallucination rate by approximately 0.13 (Qwen2-7B) and 0.11 (Llama-3.1-8B). The critical threshold for reliable answering aligns with the EDFL predictions.
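A simple way to estimate such a dose-response slope is an ordinary least-squares regression of a per-item hallucination indicator on the randomized information dose in nats; the toy values below illustrate the computation only and do not reproduce the paper's reported slopes.

    def ols_slope(x, y):
        # Least-squares slope of y on x (hallucination indicator on nats of evidence).
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
        den = sum((xi - mean_x) ** 2 for xi in x)
        return num / den

    # Toy randomized doses (nats of supporting evidence) and hallucination indicators.
    nats = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
    hallucinated = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
    print(ols_slope(nats, hallucinated))  # -0.25 here: more nats, fewer hallucinations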

Deployment Calibration via ISR Audits

A pre-specified audit on Gemma-2-9B (528 held-out items) demonstrates that ISR gating achieves near-0% hallucination (0.0–0.7%), 24% abstention, and 80.5% accuracy on attempted answers. Boundary alignment is robust (96.2%), and parameter sensitivity analyses confirm stability across reasonable ranges of permutations and clipping thresholds.

Practical Implications

Predictive and Preventive Hallucination Management

The framework enables practitioners to predict hallucination risk before generation and manage it through information budgeting. ISR gating provides a principled abstain/answer decision rule, and permutation averaging offers practical robustness with modest computational overhead. The approach is compatible with ensemble architectures, where tied-weight multi-branch compositions with averaging heads realize the theoretical benefits in practice.

Training-Time Regularization

A practical regularizer is proposed to minimize the positional Jensen penalty:

$\mathcal{L} + \lambda\, \mathbb{E}_x \mathbb{E}_{\pi,\pi'}[(\text{logit}~q_\pi(x) - \text{logit}~q_{\pi'}(x))^2]$

This reduces prediction variance across permutations, complementing inference-time methods.
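A minimal sketch of one way to estimate this penalty during training is shown below, using a two-sample Monte Carlo estimate of the expectation over orderings; prob_of_answer and permute are hypothetical helpers standing in for however the training pipeline scores the target under a given context ordering.

    import torch

    def positional_jensen_penalty(q_pi: torch.Tensor, q_pi_prime: torch.Tensor) -> torch.Tensor:
        # Squared difference of predicate logits under two independently sampled
        # orderings pi and pi' of the same exchangeable context (a two-sample
        # Monte Carlo estimate of the expectation in the regularizer above).
        logit = lambda q: torch.log(q) - torch.log1p(-q)  # log-odds of a probability in (0, 1)
        return ((logit(q_pi) - logit(q_pi_prime)) ** 2).mean()

    # Schematic training step (prob_of_answer and permute are hypothetical helpers):
    # q1 = prob_of_answer(model, permute(context, seed=0), answer)
    # q2 = prob_of_answer(model, permute(context, seed=1), answer)
    # loss = task_loss + lam * positional_jensen_penalty(q1, q2)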

Limitations and Extensions

The analysis is sharpest for binary adjudication, as EDFL provides tight guarantees for Bernoulli events. Extension to multi-class settings is possible via one-vs-rest decomposition, but further work is needed for structured outputs. The relationship between model scale and positional bias warrants systematic investigation.

Theoretical Implications

The results resolve the apparent paradox between LLMs' near-Bayesian behavior and their permutation sensitivity: positional encodings make transformers Bayesian in expectation, not in realization. Local rank stability and adjacent-swap stability assumptions are satisfied by common positional encoding schemes, ensuring logarithmic scaling of permutation-induced deviations. Assumption-free JS-based certificates provide per-item safety guarantees, and hallucinations are shown to be deterministic consequences of insufficient information rather than stochastic errors.

Future Directions

The framework opens avenues for:

  • Integrating EDFL/ISR into evaluation suites for LLM deployment.
  • Developing architectures with reduced positional sensitivity.
  • Extending the theory to multi-class and structured prediction tasks.
  • Investigating the impact of model scale and architecture on permutation sensitivity and information budgeting.

Conclusion

This work presents a unified information-theoretic account of hallucination in LLMs, showing that hallucinations are predictable compression failures arising from insufficient information for rare events. Transformers are Bayesian in expectation due to positional processing, with explicit O(logn)O(\log n) bounds on permutation-induced deviations. The EDFL quantifies hallucination risk, and operational planners enable principled prediction and prevention. The architectural closure theorem ensures these benefits are achievable in practice via ensemble architectures. This framework provides both theoretical insight and practical tools for reliable LLM deployment.
