PruneCD: Enhanced Contrastive Decoding

Updated 27 September 2025
  • PruneCD is a contrastive decoding framework for LLMs that uses targeted layer pruning to construct a more informative amateur model.
  • It mitigates hallucinations by generating sharper token probability contrasts and reducing flat, uninformative output distributions.
  • Empirical evaluations across benchmarks demonstrate that PruneCD enhances factuality with minimal inference overhead and seamless integration.

PruneCD is a contrastive decoding framework for LLMs that uses layer-pruned self-contrast to improve the factuality of generated text. Unlike prior approaches such as DoLa, which rely on early exit logits as a contrastive prior but suffer from flat, non-informative outputs, PruneCD constructs its “amateur” model by pruning selected intermediate layers from the full model. This layer-pruned design produces sharper, more meaningful contrast in the decoded token probabilities, significantly enhancing the capacity to mitigate hallucination while maintaining minimal inference overhead (Yu et al., 20 Sep 2025).

1. Motivation and Conceptual Foundations

LLMs often generate hallucinated or factually inconsistent content due to overconfidence in poorly grounded outputs. Contrastive decoding (CD) addresses this by penalizing tokens favored by an “amateur” model that is presumed to be more uncertain or less reliable than the “expert” model. Traditional CD implementations use early network exits for the amateur model, but analysis shows that such logits tend to be high-entropy, flat, and uninformative, resulting in weak contrast and limited improvement in factuality.

PruneCD overcomes these limitations by constructing the amateur model via targeted layer pruning. By removing specific intermediate layers (rather than truncating the layer stack from the top), the pruned model’s output distribution is less uniform and better aligned with the expert, yielding stronger and more informative contrast signals for decoding.

2. Methodology and Formalism

In PruneCD, the core mechanism produces “expert” and “amateur” logits at each token generation step and combines them into a contrastive score. For a given input sequence $x_{<t}$ and candidate next token $x_t$, the score is evaluated as:

$$\mathrm{CD}_{\text{score}}(x_t; x_{<t}) = \log p^{(e)}(x_t \mid x_{<t}) - \lambda \log p^{(a)}(x_t \mid x_{<t}),$$

where $p^{(e)}$ is the probability from the expert model (using the full layer set $\mathcal{L} = \{L_0, L_1, \ldots, L_{n-1}\}$), and $p^{(a)}$ is from the layer-pruned amateur model (using $\mathcal{L} \setminus S$ for a pruned subset $S \subset \mathcal{L}$).
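As a concrete illustration, the sketch below combines the two models’ next-token logits into this score. It is a minimal transcription of the formula, not the paper’s implementation; the helper name `cd_score`, the tensor shapes, and the default value of $\lambda$ are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cd_score(expert_logits: torch.Tensor,
             amateur_logits: torch.Tensor,
             lam: float = 0.5) -> torch.Tensor:
    """Contrastive score for one decoding step.

    expert_logits / amateur_logits: (vocab_size,) next-token logits from the
    full model and the layer-pruned amateur. lam is the contrast weight
    lambda (0.5 is an illustrative default, not a value from the paper).
    """
    log_p_e = F.log_softmax(expert_logits, dim=-1)   # log p^(e)(x_t | x_<t)
    log_p_a = F.log_softmax(amateur_logits, dim=-1)  # log p^(a)(x_t | x_<t)
    return log_p_e - lam * log_p_a

# Greedy contrastive decoding then picks the highest-scoring token:
# next_token = cd_score(expert_logits, amateur_logits).argmax().item()
```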

Layer selection for pruning is determined via a factual layer search, in which each candidate layer’s ablation is empirically assessed for its impact on factuality scores using a metric on datasets such as TruthfulQA. The layers whose removal produces the greatest factuality drop are selected, and the top-$k$ such layers constitute $S$.
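A minimal sketch of this one-at-a-time ablation search, assuming a caller-supplied `score_fn` that evaluates factuality (e.g., TruthfulQA accuracy) with a given set of layers skipped; the function name and signature are hypothetical:

```python
from typing import Callable, Set

def factual_layer_search(num_layers: int,
                         score_fn: Callable[[Set[int]], float],
                         k: int) -> Set[int]:
    """One-at-a-time ablation to find the pruned set S.

    score_fn(skip_layers) is assumed to run the factuality benchmark with
    the given layers removed from the forward pass and return a score.
    """
    baseline = score_fn(set())
    # Factuality drop caused by ablating each layer individually
    drops = {i: baseline - score_fn({i}) for i in range(num_layers)}
    # The k layers whose removal hurts factuality most constitute S
    return set(sorted(drops, key=drops.get, reverse=True)[:k])
```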

3. Comparison with Early Exit and DoLa

Prior contrastive decoding methods such as DoLa employ early exit logits, i.e., model outputs obtained by halting forward propagation at an intermediate layer. Analysis reveals that such strategies yield distributions with high entropy (flatness):

$$H(p) = -\sum_i p_i \log p_i,$$

and extremely low overlap with the expert’s top-$k$ predicted tokens:

$$O_k = \left| \mathrm{Top}_k(z^{(e)}) \cap \mathrm{Top}_k(z^{(a)}) \right|.$$

Empirical measurements show that early exit logits have far higher entropy than full or layer-pruned models (e.g., 11.75 vs. 1.37), and their top-25 token overlap $O_{25}$ is much lower (0.43 vs. 15.50). Layer-pruned logits in PruneCD retain substantially more structure and informative variation, directly addressing the flatness and informativeness deficits of early exit approaches.
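Both diagnostics are straightforward to compute from a pair of next-token logit vectors; a small PyTorch sketch (function names assumed):

```python
import torch
import torch.nn.functional as F

def entropy(logits: torch.Tensor) -> float:
    """Shannon entropy H(p) of the softmax distribution over the vocabulary."""
    log_p = F.log_softmax(logits, dim=-1)
    p = log_p.exp()
    return -(p * log_p).sum().item()

def topk_overlap(z_expert: torch.Tensor,
                 z_amateur: torch.Tensor,
                 k: int = 25) -> int:
    """O_k = |Top_k(z^(e)) intersect Top_k(z^(a))| for one decoding step."""
    top_e = set(z_expert.topk(k).indices.tolist())
    top_a = set(z_amateur.topk(k).indices.tolist())
    return len(top_e & top_a)
```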

4. Empirical Results and Analysis

Qualitative and quantitative analyses corroborate the superiority of layer pruning for constructing informative contrasts:

  • Visualization of logits and softmax outputs demonstrates that early exit logits are nearly uniform across the output space, whereas layer-pruned logits display differentiated structures more closely aligned with the expert model.
  • Over 1,000 TriviaQA samples, average entropy and top-25 overlap metrics validate that PruneCD’s amateur logits are substantially more informative.
  • Across benchmarks (TruthfulQA, TriviaQA, Natural Questions, GSM8K) and model scales (1B–8B parameters), PruneCD consistently improves factuality over greedy and DoLa decoding.
  • The offline search and batched evaluation of both expert and pruned logits enable the method to incur only minimal inference overhead relative to standard greedy decoding.

5. Implementation and Practical Integration

PruneCD is designed for practical deployment:

  • The factual layer search for the optimal pruning set $S$ is conducted in a brief offline phase, typically via one-at-a-time layer ablation and assessment of factuality loss.
  • During inference, PruneCD requires only a single forward pass (with a slightly modified computational graph) to obtain both expert and pruned logits, enabling efficient batched computation; a schematic appears after this list.
  • No additional models, external data, or extensive hyperparameter tuning are required, fostering drop-in compatibility with existing LLM decoding pipelines.
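For intuition, here is a schematic of how expert and pruned logits can be obtained together. It assumes a simplified decoder-only model exposing `embed`, `layers`, and `lm_head` attributes, and runs the two hidden-state streams side by side rather than reproducing the paper’s exact batched computation graph:

```python
import torch

@torch.no_grad()
def expert_and_amateur_logits(model, input_ids: torch.Tensor, pruned: set):
    """Compute expert (all layers) and amateur (all layers minus S) logits.

    Assumes a toy decoder-only model with `embed`, an iterable `layers`,
    and `lm_head`; real architectures also carry attention masks, KV
    caches, and final norms, omitted here for clarity.
    """
    h_expert = model.embed(input_ids)
    h_amateur = h_expert
    for i, layer in enumerate(model.layers):
        h_expert = layer(h_expert)      # expert path uses every layer
        if i not in pruned:             # amateur path skips the pruned set S
            h_amateur = layer(h_amateur)
    return model.lm_head(h_expert), model.lm_head(h_amateur)
```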

6. Implications and Applications

The PruneCD approach provides a robust, efficient method for reducing hallucination in LLMs. By leveraging a principled amateur construction via layer pruning, it achieves a better tradeoff between informativeness and uncertainty than early exit approaches. This allows the contrastive penalty to more effectively suppress overconfident, factually unsupported outputs.

Applications include open-domain dialogue, factual QA, content generation, and any context where factual reliability is paramount. The minimal computational overhead and absence of extra training or calibration requirements make PruneCD particularly attractive for latency-sensitive or production-grade environments.

7. Summary

PruneCD advances the field of contrastive decoding for LLMs by replacing early exit amateur models with an empirically optimized, layer-pruned alternative. This design yields non-flat, informative contrast signals backed by rigorous entropy and overlap analysis, and demonstrates marked factuality improvement across benchmarks. Its practical efficiency and plug-and-play nature position it as a robust strategy for mitigating hallucinations in large-scale LLMs (Yu et al., 20 Sep 2025).
