
Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining (2504.07912v1)

Published 10 Apr 2025 in cs.LG

Abstract: Reinforcement learning (RL)-based fine-tuning has become a crucial step in post-training LLMs for advanced mathematical reasoning and coding. Following the success of frontier reasoning models, recent work has demonstrated that RL fine-tuning consistently improves performance, even in smaller-scale models; however, the underlying mechanisms driving these improvements are not well-understood. Understanding the effects of RL fine-tuning requires disentangling its interaction with pretraining data composition, hyperparameters, and model scale, but such problems are exacerbated by the lack of transparency regarding the training data used in many existing models. In this work, we present a systematic end-to-end study of RL fine-tuning for mathematical reasoning by training models entirely from scratch on different mixtures of fully open datasets. We investigate the effects of various RL fine-tuning algorithms (PPO, GRPO, and Expert Iteration) across models of different scales. Our study reveals that RL algorithms consistently converge towards a dominant output distribution, amplifying patterns in the pretraining data. We also find that models of different scales trained on the same data mixture will converge to distinct output distributions, suggesting that there are scale-dependent biases in model generalization. Moreover, we find that RL post-training on simpler questions can lead to performance gains on harder ones, indicating that certain reasoning capabilities generalize across tasks. Our findings show that small-scale proxies in controlled settings can elicit interesting insights regarding the role of RL in shaping LLM behavior.

Summary

  • The paper demonstrates that RL fine-tuning amplifies select behaviors from pretraining by converging on a dominant output format.
  • It details how the choice and ratio of instruction datasets rapidly steer model performance, significantly improving pass@1 accuracy while limiting diversity.
  • The study highlights practical strategies for controlling output style through data curation and fine-tuning, balancing accuracy gains and format diversity.

This paper, "Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining" (2504.07912), investigates how reinforcement learning (RL) fine-tuning interacts with the data used during pretraining to improve mathematical reasoning in LLMs. The core finding is that RL fine-tuning tends to significantly amplify one specific output format or style present in the pretraining data, effectively creating an "echo chamber" for that style, while suppressing others. This convergence often correlates with performance improvements but can also lead to reduced output diversity.

Core Idea: The "Echo Chamber" Effect

The central mechanism observed is that RL fine-tuning doesn't necessarily teach entirely new reasoning skills but rather selects and reinforces specific behavioral patterns learned during pretraining. When pretrained on a mixture of datasets with distinct output formats (e.g., Python code functions, code blocks with specific tags, natural language explanations with LaTeX), the model, after RL fine-tuning, predominantly generates outputs matching just one of those formats.

Experimental Setup (for Implementation)

To study this phenomenon in a controlled way, the authors trained models entirely from scratch, allowing full transparency into the training data.

  1. Pretraining:
    • Architecture: Standard decoder-only Transformers (based on OLMo) in 150M and 1B parameter sizes. Key features include SwiGLU activations and RoPE positional encodings.
    • Datasets: A base mixture of mathematical text corpora (FineMath-3+, Algebraic-Stack) combined with varying ratios of instruction-following datasets:
      • TinyGSM: Problems with Python code solutions within a specific function structure (def simple_math_problem(): ... return result).
      • OpenMathInstruct1 (OMI1): Problems with Python code solutions enclosed in <LLM-code> tags.
      • OpenMathInstruct2 (OMI2): Problems with natural language solutions, often using LaTeX, ending in a boxed answer.
    • Method: Instruction datasets were added to the pretraining corpus by concatenating prompt and answer, without special chat templates. Ratios and repetitions (e.g., 4 x TinyGSM) were varied across experiments.
    • Hyperparameters: AdamW optimizer (learning rate 0.001, weight decay 0.1), linear warmup over 5,000 steps, then cosine decay (a minimal schedule sketch appears after this list).
  2. RL Fine-tuning:
    • Algorithms: Proximal Policy Optimization (PPO), Group Relative Policy Optimization (GRPO) using the OpenRLHF library, and Expert Iteration (EI).
    • Reward: A verifiable reward of +1 if the generated answer is numerically correct, 0 otherwise (a sketch of such a check also appears after this list).
    • Fine-tuning Data: Primarily used questions from the GSM8K training set.
    • Evaluation: Performance (pass@1, pass@64, majority@64) and output format distribution were tracked on the GSM8K test set during RL training. Transfer was tested on MATH-500 and AIME datasets.
    • Key RL Hyperparameters: For PPO/GRPO (see the paper's hyperparameter table), e.g., actor LR = 1e-6, critic LR = 7e-6, and a KL coefficient varied over {0, 0.001, 0.01}. For EI (see the paper's EI hyperparameter table), k = 64 samples generated per problem and SFT LR = 1e-4.
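
The pretraining optimizer setup is standard; below is a minimal PyTorch sketch of the warmup-plus-cosine schedule described above, assuming a peak learning rate of 1e-3, weight decay 0.1, and 5,000 warmup steps. The model and total step count are hypothetical placeholders, not the paper's exact configuration.

import math
import torch

model = torch.nn.Linear(10, 10)   # stand-in for the OLMo-style Transformer
total_steps, warmup_steps, peak_lr = 100_000, 5_000, 1e-3

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr, weight_decay=0.1)

def lr_lambda(step: int) -> float:
    # Linear warmup to the peak LR, then cosine decay to zero.
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)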
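The verifiable reward is straightforward to implement. The sketch below is one plausible version that treats the last number in the generation as the model's answer; the paper's exact answer-extraction logic is not reproduced here, so the parsing heuristic should be read as an assumption.

import re

def verifiable_reward(generation: str, gold_answer: str) -> float:
    # Extract the last number in the generation and compare it numerically
    # to the reference answer: +1 for a match, 0 otherwise.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", generation.replace(",", ""))
    if not numbers:
        return 0.0
    try:
        return 1.0 if abs(float(numbers[-1]) - float(gold_answer.replace(",", ""))) < 1e-6 else 0.0
    except ValueError:
        return 0.0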

Key Findings and Implementation Considerations

  • Pretraining Mix Drives Post-RL Behavior: The choice and proportion of instruction datasets in the pretraining mix heavily influence which output format the model converges to after RL. If you want code outputs, including a performant code-based instruction set like TinyGSM is crucial. If you prefer natural language, OMI2 is more relevant.
  • Rapid Convergence: The shift towards a dominant format happens quickly during RL, often within the first epoch. This suggests RL rapidly identifies and exploits the most rewarding pattern learned previously.
  • Performance vs. Diversity Trade-off: Convergence to one format often boosts pass@1 accuracy significantly. However, it typically reduces output diversity (lower pass@k / majority@k improvements or even degradation) as the model explores fewer solution styles. The KL divergence penalty in PPO/GRPO can mitigate this: higher KL coefficients retain more format diversity, though the impact on final pass@1 was sometimes minimal in these experiments, and setting KL=0 performed similarly to KL=0.001 (the penalized reward is sketched after this list).
  • Scale Matters: Model scale influences the preferred format.
    • 150M models: Tended to prefer the simpler, structured code format of TinyGSM.
    • 1B models: Showed a greater tendency to converge towards the natural language format of OMI2, even if it wasn't the most performant format initially. This implies larger models might have a bias towards or better capacity for natural language reasoning styles, provided they are exposed to them in pretraining.
  • Potential Failure Mode: RL doesn't always pick the "best" initial format. In some cases (illustrated in the paper's failure-case figure), the model converged to a format that was less accurate at initialization and resulted in lower final performance compared to converging to a different format under slightly different pretraining conditions. Careful monitoring of performance during RL is needed.
  • Qualitative Refinement: RL also refines outputs within the chosen format (e.g., making docstring formatting more consistent in TinyGSM outputs, as shown qualitatively in the paper). This suggests RL optimizes the generation process for the preferred style.
  • Positive Transfer: Fine-tuning on a simpler dataset (GSM8K) can improve performance on harder, related datasets (MATH-500), especially if the pretraining data included examples structurally similar to the harder task (e.g., OMI1/OMI2 derived from MATH). This indicates RL enhances general reasoning capabilities relevant to the amplified format, not just surface-level mimicry. Transfer to AIME was less pronounced. Qualitative analysis suggests improvements come from fixing logic flaws and misinterpretations, not just arithmetic.
  • Algorithm Choice: PPO provided stable convergence. GRPO showed similar trends but was less stable, sometimes experiencing performance collapses before recovery. Expert Iteration (EI) significantly underperformed PPO/GRPO in this setup and showed much slower convergence to a dominant format, potentially due to restarting SFT from the base model each iteration.
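
To make the KL trade-off concrete, here is a minimal sketch of the per-token KL-penalized reward used by PPO-style RLHF implementations. The function name and signature are illustrative (this is not OpenRLHF's API); the per-token log-probability difference is the usual KL estimate against a frozen reference policy.

import torch

def kl_penalized_rewards(task_reward: float,
                         policy_logprobs: torch.Tensor,   # (seq_len,) log-probs under the current policy
                         ref_logprobs: torch.Tensor,      # (seq_len,) log-probs under the frozen reference
                         kl_coef: float = 0.001) -> torch.Tensor:
    # Penalize each token for drifting from the reference policy; with kl_coef = 0
    # the policy is free to collapse onto the single most rewarding format.
    per_token_kl = policy_logprobs - ref_logprobs
    rewards = -kl_coef * per_token_kl
    rewards[-1] += task_reward     # verifiable reward added at the final token
    return rewards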

Practical Applications

  • Controlling Output Style: Design the pretraining data mixture strategically to guide the model towards a desired output format post-RL (e.g., Python functions, LaTeX explanations, specific XML-like structures).
  • Optimizing for Accuracy vs. Diversity: Tune the KL penalty during RL to balance convergence to a high-accuracy format against maintaining a diversity of solution approaches. Consider removing the KL penalty if pass@1 is the primary goal in similar reasoning tasks.
  • Small-Scale Prototyping: Use smaller models (like 150M) and controlled pretraining/RL experiments to gain insights into these dynamics before scaling up, keeping in mind potential scale-dependent behaviors.
  • Data Curation: Repeating high-quality data from a preferred format (like TinyGSM in the paper's examples) during pretraining can lead to larger gains from RL fine-tuning compared to adding more diverse, lower-quality datasets.

Tracking Output Formats (Example)

To replicate the format-tracking analysis, classify each generated output using format-specific patterns. The heuristics below are illustrative, not the paper's exact classifier:

def classify_output_format(generation_text: str) -> str:
    """Heuristically label a generation by the instruction format it imitates."""
    if "def simple_math_problem():" in generation_text:
        return "TinyGSM"
    lowered = generation_text.lower()
    if "<llm-code>" in lowered and "</llm-code>" in lowered:
        return "OMI1"
    # Fall back to natural language / OMI2 when no recognizable code appears;
    # more robust checks could look for LaTeX or a boxed final answer.
    looks_like_code = "def " in generation_text or "print(" in generation_text
    if not looks_like_code:
        return "Text/OMI2"
    return "Other Code/Unknown"

This paper provides strong evidence that RL fine-tuning acts largely as an amplifier of behaviors learned during pretraining. Designing the pretraining data mixture and understanding its interaction with RL are therefore critical for controlling and optimizing LLMs for tasks like mathematical reasoning.