The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward (2509.07430v1)

Published 9 Sep 2025 in cs.LG and cs.AI

Abstract: A central paradox in fine-tuning LLMs with Reinforcement Learning with Verifiable Reward (RLVR) is the frequent degradation of multi-attempt performance (Pass@k) despite improvements in single-attempt accuracy (Pass@1). This is often accompanied by catastrophic forgetting, where models lose previously acquired skills. While various methods have been proposed, the choice and function of the divergence term have been surprisingly unexamined as a proactive solution. We argue that standard RLVR objectives -- both those using the mode-seeking reverse KL-divergence and those forgoing a divergence term entirely -- lack a crucial mechanism for knowledge retention. The reverse-KL actively accelerates this decay by narrowing the policy, while its absence provides no safeguard against the model drifting from its diverse knowledge base. We propose a fundamental shift in perspective: using the divergence term itself as the solution. Our framework, Diversity-Preserving Hybrid RL (DPH-RL), leverages mass-covering f-divergences (like forward-KL and JS-divergence) to function as a rehearsal mechanism. By continuously referencing the initial policy, this approach forces the model to maintain broad solution coverage. Extensive experiments on math and SQL generation demonstrate that DPH-RL not only resolves the Pass@k degradation but improves both Pass@1 and Pass@k in- and out-of-domain. Additionally, DPH-RL is more training-efficient because it computes f-divergence using generator functions, requiring only sampling from the initial policy and no online reference model. Our work highlights a crucial, overlooked axis for improving RLVR, demonstrating that the proper selection of a divergence measure is a powerful tool for building more general and diverse reasoning models.

Summary

  • The paper shows that selecting alternative divergence measures significantly mitigates diversity collapse in RLVR.
  • DPH-RL employs a two-stage methodology with pre-sampling and online training integrating forward-KL and JS divergence penalties.
  • Empirical results with Llama and Qwen models on SQL and math tasks show improved multi-attempt performance (Pass@k) and stronger knowledge retention.

The Choice of Divergence: Mitigating Diversity Collapse in Reinforcement Learning

Overview of Reinforcement Learning with Verifiable Reward (RLVR)

This paper analyzes diversity collapse in Reinforcement Learning with Verifiable Reward (RLVR) as applied to LLMs. Although RLVR reliably improves single-attempt accuracy (Pass@1), multi-attempt performance (Pass@k) often paradoxically declines, a degradation attributed to entropy collapse that narrows solution diversity. The paper identifies the divergence term in the RLVR objective as the critical, overlooked lever and proposes Diversity-Preserving Hybrid RL (DPH-RL), which leverages mass-covering f-divergences such as forward-KL and JS-divergence.
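
For context, Pass@k measures whether at least one of k sampled attempts passes the verifier. A common way to estimate it (the standard unbiased estimator from code-generation evaluation; the paper does not necessarily use this exact protocol) is to draw n ≥ k samples per problem, count the c that verify, and average 1 − C(n−c, k)/C(n, k) over problems. A minimal sketch, with an illustrative function name:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimate for one problem.

    n: number of samples drawn, c: number that pass the verifier,
    k: attempt budget being evaluated (k <= n).
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 16 samples, 4 verified correct.
# Pass@1 is 0.25, while Pass@8 is far higher, since any one of the
# 4 correct samples appearing among 8 attempts counts as a success.
print(pass_at_k(16, 4, 1), pass_at_k(16, 4, 8))
```

The gap between these two numbers is exactly what collapses when the policy narrows: Pass@1 can rise while the diversity that sustains Pass@k disappears.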

Re-examining Reverse-KL Divergence

Most RLVR approaches traditionally use reverse-KL divergence in their objectives, a mode-seeking divergence that biases the policy toward fewer, higher-probability solutions while sacrificing broader solution exploration. As shown in the left panel of Figure 1, this accelerates knowledge decay and limits diversity on both in-domain and out-of-domain tasks because of policy narrowing.

Figure 1: Performance comparison across Bird, Spider, and Math datasets showing degradation with increasing task diversity from the training dataset.
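
Concretely, writing $\pi_\theta$ for the current policy and $\pi_0$ for the initial (reference) policy, the reverse and forward KL penalties differ only in which distribution the expectation is taken under, which is exactly what makes one mode-seeking and the other mass-covering (standard definitions, not notation taken from the paper):

```latex
% Reverse KL (mode-seeking): expectation under the current policy.
% It stays small whenever pi_theta concentrates on any region where pi_0
% has mass, so collapsing onto a few solution modes incurs little penalty.
D_{\mathrm{KL}}(\pi_\theta \,\|\, \pi_0)
  = \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}
    \left[ \log \frac{\pi_\theta(y \mid x)}{\pi_0(y \mid x)} \right]

% Forward KL (mass-covering): expectation under the initial policy.
% It grows without bound if pi_theta assigns near-zero probability to any
% solution pi_0 could produce, so broad coverage of pi_0's modes is enforced.
D_{\mathrm{KL}}(\pi_0 \,\|\, \pi_\theta)
  = \mathbb{E}_{y \sim \pi_0(\cdot \mid x)}
    \left[ \log \frac{\pi_0(y \mid x)}{\pi_\theta(y \mid x)} \right]
```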

Introducing Diversity-Preserving Hybrid RL

DPH-RL replaces the reverse-KL term with alternative divergences chosen to foster diverse solution generation. Mass-covering divergences such as forward-KL force the policy to keep revisiting the diverse solution paths already present in the initial model. Because the divergence is computed against the initial policy, it effectively integrates an "anchor dataset" that preserves original knowledge and mitigates catastrophic forgetting, improving both Pass@1 and Pass@k.
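
Because the forward-KL expectation is taken under the frozen initial policy, its Monte Carlo estimate only needs responses sampled from $\pi_0$ once before training, together with their cached log-probabilities; minimizing it then pushes up $\log \pi_\theta$ on those reference responses, which is why it behaves like a rehearsal term. A minimal PyTorch-style sketch under that reading (function and tensor names are illustrative, not the paper's code):

```python
import torch

def forward_kl_penalty(policy_logprobs: torch.Tensor,
                       ref_logprobs: torch.Tensor) -> torch.Tensor:
    """Monte Carlo estimate of KL(pi_0 || pi_theta) on responses drawn from pi_0.

    policy_logprobs: log pi_theta(y|x) for the pre-sampled reference responses,
                     computed with the current policy (requires grad).
    ref_logprobs:    log pi_0(y|x) cached at pre-sampling time (no grad needed).
    """
    # E_{y ~ pi_0}[log pi_0(y|x) - log pi_theta(y|x)], averaged over the batch.
    return (ref_logprobs - policy_logprobs).mean()

# Added to an RLVR objective as: loss = policy_loss + beta * forward_kl_penalty(...)
# Its gradient w.r.t. theta is -E_{pi_0}[grad log pi_theta(y|x)], i.e. a
# likelihood-boosting ("rehearsal") signal on the initial policy's solutions.
```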

Methodology and Implementation

DPH-RL is implemented in two stages. A pre-sampling stage first builds static datasets from the initial policy, distinguishing mastered from challenging problems; an online training stage then applies forward-KL or JS-divergence penalties to ensure robust exploration alongside knowledge retention. Because these f-divergences are computed through their generator functions, the method maintains diversity without requiring an online reference model.

Figure 2: Reinforcement learning training with base models and multiple solution styles for SQL problem-solving.
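
Putting the two stages together, one plausible reading of the pipeline described above is the skeleton below; the function names, the mastered/challenging split criterion, and the loss combination are illustrative placeholders rather than the authors' implementation:

```python
import random

def pre_sample(prompts, sample_fn, verifier, n_samples=8):
    """Stage 1: build a static anchor set by sampling the frozen initial policy.

    sample_fn(prompt) -> candidate response; verifier(prompt, response) -> bool.
    """
    anchor_set, hard_prompts = {}, []
    for prompt in prompts:
        responses = [sample_fn(prompt) for _ in range(n_samples)]
        correct = [r for r in responses if verifier(prompt, r)]
        if correct:
            # "Mastered" by the initial policy: these responses feed the
            # divergence / rehearsal penalty during online training.
            anchor_set[prompt] = correct
        else:
            # "Challenging": no verified solution yet; learned mainly from
            # the verifiable reward signal.
            hard_prompts.append(prompt)
    return anchor_set, hard_prompts

def dph_loss(rl_loss, divergence_fn, policy, anchor_set, batch_prompts, beta=0.1):
    """Stage 2: standard RLVR loss plus a mass-covering divergence penalty.

    divergence_fn(policy, prompt_to_response) evaluates forward-KL or JS
    (via its generator function) on anchor responses, so no online
    reference model has to be kept in memory.
    """
    anchored = {p: random.choice(anchor_set[p])
                for p in batch_prompts if p in anchor_set}
    return rl_loss + beta * divergence_fn(policy, anchored)
```

The key property is that the anchor set is computed once, so the divergence penalty costs only an extra forward pass over cached reference responses rather than maintaining a separate online reference model, which is the efficiency claim made in the abstract.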

Empirical Validation

Extensive experiments with Llama and Qwen models on SQL and mathematics tasks show that DPH-RL effectively addresses diversity collapse. Notably, on SQL tasks, DPH-RL's Pass@8 scores substantially surpass the baselines, demonstrating robust diversity on both the training distribution and out-of-domain test sets, a regime where prior methods such as GRPO and DAPO struggle.

Figure 3: Comparative analysis of preservation and exploration in RL-tuned models vs. base models using Llama.

Conclusions

DPH-RL represents a meaningful advance in RLVR practice, leveraging mass-covering f-divergences to maintain solution diversity and mitigate knowledge decay. This use of the divergence term addresses entropy collapse and improves out-of-domain performance without requiring refinement from stronger external reasoning models, a step toward more general and diverse reasoning models. Future work includes refining the selection of divergence measures to further balance diversity and performance in RLVR.
