
Perplexity Cannot Always Tell Right from Wrong

Published 30 Jan 2026 in cs.LG, cs.AI, cs.CL, and stat.ML | (2601.22950v1)

Abstract: Perplexity -- a function measuring a model's overall level of "surprise" when encountering a particular output -- has gained significant traction in recent years, both as a loss function and as a simple-to-compute metric of model quality. Prior studies have pointed out several limitations of perplexity, often in an empirical manner. Here we leverage recent results on Transformer continuity to show in a rigorous manner how perplexity may be an unsuitable metric for model selection. Specifically, we prove that, if there is any sequence that a compact decoder-only Transformer model predicts accurately and confidently -- a necessary prerequisite for strong generalisation -- this implies the existence of another sequence with very low perplexity that is nonetheless not predicted correctly by that same model. Further, by analytically studying iso-perplexity plots, we find that perplexity will not always select for the more accurate model -- rather, any increase in model confidence must be accompanied by a commensurate rise in accuracy for the new model to be selected.

Summary

  • The paper shows that perplexity conflates model confidence with correctness, undermining its effectiveness as a selection metric.
  • It introduces iso-perplexity curves to formally link accuracy and confidence, revealing calibration issues in transformer performance.
  • Empirical evidence from synthetic and real-world tasks confirms that low perplexity does not always correspond to high accuracy.

Limitations of Perplexity in Model Selection for Transformer-based LLMs

Context and Motivation

Perplexity has long served as the de facto measure for evaluating and selecting autoregressive LLMs. As both a loss function and a practical metric, it quantifies the "surprise" of a model with respect to reference sequences, underpinning comparative evaluations of model progress, checkpoint selection, and even architectural decisions. However, despite its theoretical convenience, the alignment of perplexity with true model performance, particularly accuracy, remains questionable. This paper provides a rigorous, formal dissection of perplexity's shortcomings, specifically within the regime of compact decoder-only Transformer architectures, and establishes mathematical and empirical evidence that perplexity cannot reliably distinguish "right" from "wrong" in model selection.
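Concretely, perplexity is the exponentiated average negative log-likelihood the model assigns to the reference tokens. A minimal sketch (illustrative only, not the paper's code):

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence, given the model's probability for each
    reference token: exp of the average negative log-likelihood."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that is confident and correct on every token...
confident = [0.9] * 10
# ...versus one that is hesitant but still ranks the right token first.
hesitant = [0.4] * 10

print(perplexity(confident))  # ≈ 1.11
print(perplexity(hesitant))   # ≈ 2.5
```

Note that the metric sees only the probabilities assigned to the reference tokens, never whether the model's own argmax prediction matches them; this is the root of the confidence/correctness conflation discussed below.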

Theoretical Foundations: Perplexity, Confidence, and Predictive Errors

Central to the analysis is the recognition that perplexity conflates correctness (accuracy) and confidence. The authors leverage recent continuity theorems for Transformer architectures with compact position embeddings [Pasten et al., 2025], revealing a formal result: if such a Transformer can confidently and correctly predict any sufficiently long sequence, there must always exist another sequence for which the model's perplexity is low—approaching that of the correctly predicted sequence—yet on which the model is systematically wrong.

This stems from the continuity property: high-confidence predictions for one input sequence propagate to "close" sequences due to smoothness, even if those sequences are not correctly predicted. Consequently, error signals for these confounded sequences are diluted, leading to vanishing gradients and obstructing correction through further training.

Moreover, the analysis generalizes to stochastic sampling regimes. By modulating sampling temperature, the authors show that the existence and indistinguishability (in perplexity space) of confidently erroneous outputs persist, with probability bounds associated with the model's confidence parameter.
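The temperature mechanism modulated here is standard logit scaling, which changes how confident the output distribution is without changing which token is ranked first. A generic illustration (not the paper's exact setup):

```python
import math

def softmax_with_temperature(logits, T):
    """Temperature scaling: divide logits by T before the softmax.
    T < 1 sharpens the distribution (more confident); T > 1 flattens it."""
    scaled = [z / T for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.5]
for T in (0.5, 1.0, 2.0):
    print(T, softmax_with_temperature(logits, T))
```

Because the argmax is invariant under temperature, a confidently wrong greedy prediction stays wrong at any temperature; only the probability mass around it, and hence the perplexity, moves.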

Iso-Perplexity Analysis: Decoupling Accuracy and Confidence

A key analytic contribution is the study of iso-perplexity curves in the accuracy–confidence plane. Under plausible homogeneity assumptions, the authors derive closed-form expressions relating accuracy and model confidence to perplexity. Critically, for a gain in model confidence to yield an improvement in perplexity, a commensurate increase in accuracy is required; otherwise, perplexity penalizes the increased confidence, regardless of the actual improvement in predictive correctness.

Distinct "unfavourable regions" are identified:

  • A model can be strictly more accurate yet receive a higher (worse) perplexity than a less accurate baseline if it does not achieve sufficient confidence gains.
  • Conversely, a model with higher confidence and strictly lower accuracy can attain better perplexity scores—a misalignment that directly undermines perplexity as a discriminative objective.

This analytic framework explains how calibration (or miscalibration) dynamics interact with perplexity, particularly when models are pushed to be more confident (a common optimization incentive during maximum likelihood training).
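Under one plausible homogeneity parameterisation (a sketch; the paper's exact assumptions may differ), per-token perplexity can be written directly in terms of accuracy and confidence, making the unfavourable regions easy to exhibit numerically:

```python
import math

def ppl(accuracy, confidence, vocab=2):
    """Per-token perplexity under a simple homogeneity assumption:
    the model puts `confidence` on its argmax token, which is correct a
    fraction `accuracy` of the time, and spreads the remaining mass
    uniformly over the other vocab - 1 tokens."""
    wrong_p = (1 - confidence) / (vocab - 1)
    nll = -(accuracy * math.log(confidence)
            + (1 - accuracy) * math.log(wrong_p))
    return math.exp(nll)

# A strictly more accurate but less confident model can score worse:
print(ppl(0.95, 0.6))  # accurate, modest confidence: ≈ 1.70
print(ppl(0.85, 0.9))  # less accurate, very confident: ≈ 1.54
```

Sweeping `confidence` at fixed `accuracy` (and vice versa) traces out the iso-perplexity curves: the second model wins on perplexity despite making errors 15% of the time rather than 5%, precisely the misalignment described above.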

Empirical Evidence: Synthetic and Real-World Tasks

The theoretical results are validated empirically using both synthetic (bitstring copy task) and realistic (Gemma 3 4B LLM, Transformer parity tasks) benchmarks. In both small-vocabulary and large-scale language modeling settings, it is shown concretely that as sequence length increases, the perplexity gap between correct and systematically incorrect sequences collapses, despite deterministic misprediction. This holds across both greedy and stochastic sampling. Perplexity thus provides insufficient granularity to surface these failures.
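The length effect can be reproduced with a toy calculation (a hedged sketch, not the paper's bitstring setup): flip one token in an otherwise confidently predicted sequence and watch the per-token perplexity gap shrink as the length L grows.

```python
import math

def seq_perplexity(correct_p, wrong_p, length, n_errors):
    """Per-token perplexity when `n_errors` tokens receive probability
    wrong_p and the rest receive correct_p (a homogeneous toy model)."""
    nll = -(n_errors * math.log(wrong_p)
            + (length - n_errors) * math.log(correct_p))
    return math.exp(nll / length)

for L in (8, 64, 512, 4096):
    good = seq_perplexity(0.99, 0.01, L, 0)  # fully correct sequence
    bad = seq_perplexity(0.99, 0.01, L, 1)   # one confidently wrong token
    print(f"L={L:5d}  gap={bad - good:.4f}")
```

The single error contributes a fixed amount to the total negative log-likelihood, but the per-token average divides it by L, so the gap vanishes as sequences get longer, even though the misprediction is deterministic.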

Crucially, when evaluating Transformer checkpoints on in-distribution versus OOD data (e.g., parity task), the correlation between accuracy and perplexity strongly diverges. In-distribution, the two are well anticorrelated (Pearson r ≈ −0.94), reflecting meaningful loss improvements as accuracy rises. OOD, however, the correlation is not just weak, but inverts: higher accuracy models may have higher perplexity, and vice versa. This is especially pronounced for low-entropy (high-confidence) model regimes, reaffirming that distribution drift and calibration issues compound the metric's unreliability.

Implications for Theory and Practice

The findings carry direct consequences for the practical evaluation and deployment of Transformers and other autoregressive sequence models:

  • Model Selection: Reliance on perplexity as the primary selection criterion is inadvisable, especially for models anticipated to see OOD data or for tasks where accuracy is not directly accessible. Practitioners risk preferring confident-but-wrong models.
  • Calibration and Training: From a training dynamics perspective, driving models to high-confidence regimes risks locking in "silent" errors, where mispredictions are masked by smoothing in the perplexity objective, preventing recovery by gradient methods.
  • Long-Context and Distributional Generalization: The results extend and provide theoretical backing to recent empirical critiques of perplexity in the context of long-range dependencies and context window scaling. They also align with observed phenomena such as "lost in the middle" failures and ineffective retrieval in extended-context tasks, where models can maintain low perplexity despite retrieval breakdown.
  • Metric Development: The paper demonstrates the necessity for alternative or augmented evaluation metrics—metrics that decouple calibration from correctness and that can meaningfully interrogate model behavior under distribution shift or adversarial conditions. While suggestions such as context-local variants of perplexity exist, a comprehensive replacement remains an open problem.

Conclusion

This work establishes, through both rigorous theoretical analysis and empirical validation, that perplexity alone is an insufficient and sometimes misleading indicator of model performance for compact decoder-only Transformers. The metric's conflation of confidence and correctness, together with dynamic calibration mismatches during training and deployment, ensure that more accurate models may not always be selected. As Transformer-based LLMs are increasingly trusted for high-stakes, generalization-challenging applications, evaluation frameworks must evolve beyond perplexity to ensure robust, reliable model deployment.

