
Scaling Laws for Reward Model Overoptimization (2210.10760v1)

Published 19 Oct 2022 in cs.LG and stat.ML

Abstract: In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much can hinder ground truth performance, in accordance with Goodhart's law. This effect has been frequently observed, but not carefully measured due to the expense of collecting human preference data. In this work, we use a synthetic setup in which a fixed "gold-standard" reward model plays the role of humans, providing labels used to train a proxy reward model. We study how the gold reward model score changes as we optimize against the proxy reward model using either reinforcement learning or best-of-$n$ sampling. We find that this relationship follows a different functional form depending on the method of optimization, and that in both cases its coefficients scale smoothly with the number of reward model parameters. We also study the effect on this relationship of the size of the reward model dataset, the number of reward model and policy parameters, and the coefficient of the KL penalty added to the reward in the reinforcement learning setup. We explore the implications of these empirical results for theoretical considerations in AI alignment.

Scaling Laws for Reward Model Overoptimization

The paper "Scaling Laws for Reward Model Overoptimization" investigates the phenomenon of overoptimization in reinforcement learning from human feedback (RLHF). In this paradigm, a reward model (RM) is trained to predict human preferences and is subsequently utilized to optimize policy models. The authors take particular interest in the observed decline in true performance when optimizing against imperfect reward models, aligning with Goodhart's law. They establish scaling laws that describe how this overoptimization varies with the method of optimization and the scale of model parameters.

Methodology Overview

To quantify the effect of overoptimization without incurring the prohibitive costs associated with human labeling, the researchers use a synthetic setup. A large, fixed "gold-standard" reward model generates labels in place of human feedback, allowing them to study optimization and overoptimization in a controlled environment. Two optimization techniques are compared: policy gradient-based reinforcement learning (RL) and best-of-$n$ sampling (BoN).
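
As a rough, illustrative sketch of this setup (not the authors' code), the gold RM can be treated as a black-box scorer whose pairwise comparisons become the proxy RM's training labels; all function names below are hypothetical placeholders.

```python
import random

def gold_reward(prompt: str, completion: str) -> float:
    """Placeholder for the large, fixed gold-standard reward model."""
    return random.random()  # stand-in score; a real setup would query the gold RM

def synthetic_label(prompt: str, completion_a: str, completion_b: str) -> int:
    """Return 0 if the gold RM prefers completion A, 1 if it prefers B."""
    return 0 if gold_reward(prompt, completion_a) >= gold_reward(prompt, completion_b) else 1

# The resulting (prompt, A, B, label) tuples form the proxy RM's training data;
# the policy is then optimized against the proxy with RL or best-of-n sampling.
```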

The researchers study how the gold reward model score varies with the distance $d := \sqrt{D_{\mathrm{KL}}(\pi \,\|\, \pi_{\text{init}})}$, the square root of the Kullback-Leibler (KL) divergence between the optimized policy and the initial policy. They identify different functional forms depending on the optimization technique:

  • For BoN sampling: $R_{\text{BoN}}(d) = d\,(\alpha_{\text{BoN}} - \beta_{\text{BoN}}\, d)$
  • For RL: $R_{\text{RL}}(d) = d\,(\alpha_{\text{RL}} - \beta_{\text{RL}} \log d)$

These forms are parameterized by the coefficients $\alpha$ and $\beta$, which depend on the size of the reward model and of its training dataset.
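
Transcribed directly into code, the two fitted forms look as follows; the coefficient values used here are arbitrary placeholders rather than fits from the paper.

```python
import numpy as np

def gold_score_bon(d, alpha_bon, beta_bon):
    # Best-of-n form: R_BoN(d) = d * (alpha_BoN - beta_BoN * d)
    return d * (alpha_bon - beta_bon * d)

def gold_score_rl(d, alpha_rl, beta_rl):
    # RL form: R_RL(d) = d * (alpha_RL - beta_RL * log d)
    return d * (alpha_rl - beta_rl * np.log(d))

d = np.linspace(0.1, 10.0, 50)  # KL distances to evaluate
print(gold_score_bon(d, alpha_bon=1.0, beta_bon=0.05).max())
print(gold_score_rl(d, alpha_rl=1.0, beta_rl=0.1).max())
```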

Key Findings and Implications

Optimization Method Differences

The paper delineates significant distinctions between RL and BoN methods:

  1. Efficiency in KL Divergence: RL consumes more KL divergence than BoN to reach comparable levels of optimization and overoptimization. This makes KL a poor metric for comparing across methods, given RL's inherently less efficient use of KL distance (see the sketch after this list).
  2. Proxy vs. Gold Score Relationship: Despite RL consuming more KL, plotting gold scores against proxy reward model scores shows RL and BoN following similar trends, hinting at underlying commonalities that could be explored further for better optimization insights.
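
One reason KL is an awkward axis for cross-method comparison is that the two methods incur it very differently: for BoN the KL from the initial policy has the closed form used in the paper, $\mathrm{KL}_{\text{BoN}} = \log n - (n-1)/n$, whereas the RL policy's KL must be measured during training. A quick numerical check of the BoN formula:

```python
import numpy as np

def kl_best_of_n(n: int) -> float:
    # Analytic KL between the best-of-n distribution and the original policy
    return np.log(n) - (n - 1) / n

for n in (2, 4, 16, 64, 256):
    print(f"n={n:4d}  KL_BoN={kl_best_of_n(n):.3f} nats")
```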

Scaling of Reward Model and Policy Parameters

Two primary scaling laws emerge from the analysis:

  1. Smooth Coefficient Scaling: The coefficients $\alpha_{\text{BoN}}$, $\beta_{\text{BoN}}$, and $\beta_{\text{RL}}$ vary smoothly, roughly logarithmically, with reward model size, allowing gold RM scores to be predicted at larger scales (a curve-fitting sketch follows this list).
  2. Policy Size Robustness: Larger policies gained less from optimization against the proxy RM but did not differ significantly in overoptimization behavior. This suggests that robustness to overoptimization is not inherently tied to policy size, which could have implications for model scaling approaches in RLHF.
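
To illustrate how the smooth coefficient scaling can be used predictively, the sketch below fits one coefficient against the logarithm of reward model size; both the grid of model sizes and the coefficient values are invented for illustration, not taken from the paper.

```python
import numpy as np

rm_params = np.array([3e6, 25e6, 300e6, 3e9])   # hypothetical reward model sizes
beta_rl   = np.array([0.26, 0.21, 0.16, 0.11])  # invented fitted coefficients

# Linear fit in log(parameter count); extrapolate to predict beta_RL at a new scale.
slope, intercept = np.polyfit(np.log(rm_params), beta_rl, deg=1)
predicted = intercept + slope * np.log(30e9)
print(f"predicted beta_RL at 30B params: {predicted:.3f}")
```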

KL Penalty Ineffectiveness

Applying a KL penalty in RL did not yield measurable improvements in gold RM score. This counterintuitive result suggests that KL penalties may not always deliver the intuitively expected mitigation of overoptimization.
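
For context, the sketch below shows the standard way a KL penalty enters the RL reward in RLHF-style training; this is the general recipe rather than the authors' exact implementation, and the coefficient value is illustrative.

```python
import numpy as np

def penalized_reward(proxy_reward: float,
                     logprob_policy: np.ndarray,
                     logprob_init: np.ndarray,
                     beta_kl: float = 0.05) -> float:
    # Per-sequence KL estimate: sum of per-token log-ratios against the initial policy.
    kl_estimate = float(np.sum(logprob_policy - logprob_init))
    return proxy_reward - beta_kl * kl_estimate
```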

Theoretical Considerations

The authors discuss the implications of their findings within the framework of Goodhart's law, particularly focusing on three forms:

  1. Regressional Goodhart: Independent noise in the proxy reward produces regressional Goodhart, which the authors' model captures with the $\alpha$ term (a toy simulation follows this list).
  2. Extremal Goodhart: The proxy model becomes increasingly likely to fail out of distribution as optimization pushes harder, an effect associated with the $\beta$ term and manifesting as nonlinear, often detrimental reward shifts.
  3. Causal Goodhart: Features that correlate with reward without causing it can still degrade performance when optimized, similar to regressional issues but compounded by misleading causal signals.
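
A tiny simulation (not from the paper) makes the regressional case concrete: when the proxy is the true reward plus independent noise, selecting the best of $n$ samples by proxy score recovers noticeably less true reward than selecting by the true reward directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 64, 10_000

true_reward = rng.normal(size=(trials, n))
proxy_reward = true_reward + rng.normal(scale=1.0, size=(trials, n))  # noisy proxy

best_by_proxy = true_reward[np.arange(trials), proxy_reward.argmax(axis=1)]
best_by_truth = true_reward.max(axis=1)

print(f"mean true reward, selecting by proxy: {best_by_proxy.mean():.2f}")
print(f"mean true reward, selecting by truth: {best_by_truth.mean():.2f}")
```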

Implications for AI Alignment and Future Research

From an AI alignment perspective, these findings underscore the necessity for robust reward models and caution against unchecked optimization practices. Understanding the bounds and characteristics of overoptimization is paramount for developing safe and effective AI systems, especially in scalable and human-aligned contexts.

Several avenues for future research are suggested:

  • Exploring other optimization methodologies besides RL and BoN.
  • Extending the synthetic setup validations to real-world scenarios.
  • Investigating iterative reinforcement learning from human feedback, in which the reward model is periodically retrained, to refine reward model estimates.
  • Exploring the potential for adversarial Goodhart effects as models become more capable.

This paper provides a substantive foundation for understanding the intricacies of reward model overoptimization and proposes scalable approaches to mitigate its adverse effects, benefiting the broader research community focused on AI alignment and optimization strategies.

Authors (3)
  1. Leo Gao (16 papers)
  2. John Schulman (43 papers)
  3. Jacob Hilton (18 papers)
Citations (362)