Demystifying OPD: Length Inflation and Stabilization Strategies for Large Language Models

Published 9 Apr 2026 in cs.CL and cs.LG | (2604.08527v1)

Abstract: On-policy distillation (OPD) trains student models under their own induced distribution while leveraging supervision from stronger teachers. We identify a failure mode of OPD: as training progresses, on-policy rollouts can undergo abrupt length inflation, causing truncated trajectories to dominate the training data. This truncation collapse coincides with abrupt repetition saturation and induces biased gradient signals, leading to severe training instability and sharp degradation in validation performance. We attribute this problem to the interaction between student-induced data collection and the distillation objective, which implicitly favors long and repetitive rollouts. To address this issue, we propose StableOPD, a stabilized OPD framework that combines a reference-based divergence constraint with rollout mixture distillation. These together mitigate repetition-induced length inflation and further stabilize OPD training. Across multiple math reasoning datasets, our approach prevents truncation collapse, stabilizes training dynamics, and improves performance by 7.2% on average.

Summary

  • The paper identifies a failure mode in on-policy distillation where repetitive tokens inflate rollout lengths and degrade generalization in reasoning tasks.
  • The paper proposes Stable-OPD, incorporating mixture distillation and KL-divergence regularization to anchor parameter updates and counteract degenerate trajectories.
  • The paper demonstrates that Stable-OPD achieves a mean performance gain of 7.2% on mathematical benchmarks compared to standard OPD methods.

Demystifying OPD: Failure Modes and Stabilization for LLMs

Introduction

On-policy distillation (OPD) has become a preferred protocol for aligning student LLMs with stronger teacher models: student parameters are updated on rollouts sampled from the student's own policy. This training regime lets the student learn on the distribution it will encounter at inference, thereby reducing train-test mismatch. However, this paper identifies and systematically analyzes a critical failure mode unique to OPD, rollout length inflation via repetition saturation, which severely destabilizes optimization and degrades generalization, especially in mathematical reasoning tasks (2604.08527).

Characterization of OPD Pathologies

The core empirical observation is that, as OPD progresses, student-generated trajectories exhibit abrupt length inflation. Instead of terminating naturally or generating diverse output, a large proportion of rollouts are truncated by a hard length cutoff due to the emergence of highly repetitive, low-entropy continuations. The paper demonstrates that this truncation-repetition regime is consistently observed across datasets and student-teacher configurations.
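The truncation and repetition symptoms described here are straightforward to monitor from rollouts alone. The sketch below is a hypothetical diagnostic, not the paper's exact metric: `rollout_stats`, the `ngram` window, and the repeated-n-gram rate are illustrative choices for quantifying "hit the hard length cutoff" and "highly repetitive, low-entropy continuations".

```python
def rollout_stats(rollouts, max_len, ngram=4):
    """Return (fraction of truncated rollouts, mean repeated-n-gram rate).

    rollouts: list of token-id lists. A rollout counts as truncated if
    it reached the hard length cutoff `max_len`. The repetition rate of
    a rollout is the fraction of its n-grams that are duplicates.
    (Hypothetical diagnostic; the paper's exact metrics may differ.)
    """
    truncated = sum(len(r) >= max_len for r in rollouts) / len(rollouts)
    rep_rates = []
    for r in rollouts:
        grams = [tuple(r[i:i + ngram]) for i in range(len(r) - ngram + 1)]
        if grams:
            rep_rates.append(1 - len(set(grams)) / len(grams))
    rep_rate = sum(rep_rates) / len(rep_rates) if rep_rates else 0.0
    return truncated, rep_rate

# Toy check: one rollout stuck repeating a 4-token loop until the
# cutoff, and one short rollout that terminated naturally.
trunc_frac, rep_rate = rollout_stats([[1, 2, 3, 4] * 4, [1, 2, 3, 4, 5]],
                                     max_len=16)
```

In the failure regime the paper describes, both statistics rise abruptly together: the truncated fraction approaches 1 as repetitive continuations push rollouts into the cutoff.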

Mechanistically, this instability arises from an interaction between the student-generated data distribution and the likelihood-based OPD objective. Once the student enters repetitive states, the token-level reverse-KL advantage becomes systematically larger for repetitive tokens than for regular tokens. This disproportionate reward arises because the student drifts away from the teacher’s likelihood profile, which makes repetitive tokens appear highly improvable under the OPD signal. As the frequency of such tokens grows, their influence on parameter updates dominates, forming a positive-feedback loop: repetition begets more repetition, ultimately saturating training and inducing biased gradients.
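The reverse-KL view makes the asymmetry concrete: the per-token signal grows with how far the student's distribution has drifted from the teacher's. Below is a minimal sketch, assuming both models expose per-token probability distributions over a shared vocabulary; the dict representation and the function name are illustrative stand-ins for the models' softmax outputs.

```python
import math

def reverse_kl_per_token(student_probs, teacher_probs):
    """Per-token reverse KL, D_KL(student || teacher), at each position.

    student_probs, teacher_probs: one {token: probability} dict per
    sampled position (hypothetical stand-in for model softmax outputs).
    """
    losses = []
    for s, t in zip(student_probs, teacher_probs):
        kl = sum(p * math.log(p / t[tok]) for tok, p in s.items() if p > 0)
        losses.append(kl)
    return losses

# Position 0: a peaked (repetitive) student far from the teacher.
# Position 1: student and teacher agree.
student = [{"a": 0.9, "b": 0.1}, {"a": 0.5, "b": 0.5}]
teacher = [{"a": 0.5, "b": 0.5}, {"a": 0.5, "b": 0.5}]
per_tok = reverse_kl_per_token(student, teacher)
```

A position where the peaked repetitive distribution disagrees with the teacher yields a much larger per-token value than one where the two agree, which is the disproportionate signal that lets repetitive tokens dominate the gradient.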

Critically, these issues are not merely a manifestation of the length bias studied in RL-based LLM optimization. The root cause is the coupling of on-policy sampling and likelihood-based supervision inherent to OPD: the student “hacks” the objective by exploiting misspecified rewards on degenerate trajectories.

Stable-OPD: Mitigation via Mixture Distillation and Divergence Constraints

To explicitly address this failure mode, the authors propose Stable-OPD, which incorporates two stabilization mechanisms:

  1. Mixture Distillation: A fraction of each training minibatch contains high-quality, complete, non-repetitive “golden” demonstrations mixed with on-policy rollouts. This anchors the student’s parameter updates toward desirable behaviors, rebalancing gradients away from degenerate long rollouts.
  2. KL-Divergence Regularization: A reference policy (typically the initialization checkpoint) is used to penalize excessive drift at the prefix level via an additional KL regularization term. This constraint directly counteracts the rapid policy shifts associated with abrupt length inflation.
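The two mechanisms above combine into a single training objective. The sketch below uses hypothetical scalar losses and weights; `mix_ratio` and `beta` are illustrative hyperparameters, and the paper's exact formulation may differ.

```python
def stable_opd_loss(onpolicy_loss, golden_loss, kl_to_reference,
                    mix_ratio=0.25, beta=0.1):
    """Hypothetical combined Stable-OPD objective.

    onpolicy_loss:   distillation loss on the student's own rollouts
    golden_loss:     distillation loss on golden demonstrations
    kl_to_reference: KL(student || reference policy), averaged over prefixes
    mix_ratio:       fraction of the batch drawn from golden data
    beta:            weight of the drift penalty toward the reference
    """
    distill = (1 - mix_ratio) * onpolicy_loss + mix_ratio * golden_loss
    return distill + beta * kl_to_reference

# Toy scalar example.
total = stable_opd_loss(onpolicy_loss=1.0, golden_loss=0.5,
                        kl_to_reference=2.0)
```

The mixture term rebalances gradients away from degenerate long rollouts, while the `beta`-weighted term bounds how fast the policy can drift from its initialization, the two failure channels identified above.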

Ablation studies reveal that each component independently increases stability and performance, but their combination is necessary for robust suppression of truncation-repetition collapse.

Experimental Evaluation

The stabilization effect of Stable-OPD is strongly evidenced by consistent improvements in validation accuracy and attenuation of rollout truncation and repetition rates. On a suite of mathematical reasoning benchmarks—AIME, AMC, Minerva, OlympiadBench, and MATH500—Stable-OPD yields a mean performance gain of 7.2% over standard OPD. This gain is consistent across both 1.5B and 7B parameter student backbones, and the stabilized method outperforms specialized RLVR pipelines including GRPO, SimpleRL-Zero, Oat-Zero, PRIME-Zero, and OpenReasonerZero.

Quantitatively, baseline OPD trails conventional SFT and GRPO in accuracy despite its access to dense supervision. Adding mixture distillation and KL regularization reverses this trend, establishing a new state of the art for on-policy student-teacher distillation in reasoning tasks. Importantly, the mitigation of training collapse is robust across diverse teacher choices, further validating the generality of the diagnosis and the proposed interventions.

Theoretical and Practical Implications

The analysis clarifies that OPD’s instabilities do not stem from the generic sequence-length bias observed in RL training; they emerge from self-reinforcing feedback between the distribution of rollouts and token-level rewards when distillation occurs under on-policy sampling. The framework and proposed remedies apply beyond mathematical reasoning and suggest a general principle for stable policy distillation: the distribution of the gradient signal must be actively managed to prevent collapse into suboptimal attractors.

On the practical side, the empirical results demonstrate that incorporating even a modest fraction of golden trajectories alongside explicit regularization suffices to suppress the induced biases and stabilize optimization, without a fundamental overhaul of the OPD protocol or recourse to more complex RLVR reward engineering.

Future Directions

The work points toward several promising avenues for future research in LLM alignment and distillation. The discovered failure mode underscores the need for deeper theoretical analysis of the interplay between student data distributions and teacher-driven reward signals in large-scale on-policy optimization. Extending the proposed stabilization framework to open-ended generation, multi-turn dialogue, or tasks with non-verifiable rewards is a natural next step toward broader generalization. Additionally, dynamically scheduling the mixture ratio, or adapting the reference policy during training, offers potential for further robustness gains.

Conclusion

This paper provides a rigorous diagnosis of a critical training pathology—length inflation caused by repetition saturation—in on-policy distillation of LLMs. It exposes the limitations of standard OPD, offering principled and empirically validated stabilization strategies. By formalizing and preventing unintended feedback loops in gradient signaling, the Stable-OPD protocol both advances the state of the art on mathematical reasoning tasks and offers broadly relevant insights into stable and effective training of LLMs via teacher signal alignment (2604.08527).
