Scalable Ensembling For Mitigating Reward Overoptimisation (2406.01013v2)

Published 3 Jun 2024 in cs.LG and cs.CL

Abstract: Reinforcement Learning from Human Feedback (RLHF) has enabled significant advancements within language modelling for powerful, instruction-following models. However, the alignment of these models remains a pressing challenge as the policy tends to overfit the learned "proxy" reward model past an inflection point of utility as measured by a "gold" reward model that is more performant -- a phenomenon known as overoptimisation. Prior work has mitigated this issue by computing a pessimistic statistic over an ensemble of reward models, which is common in Offline Reinforcement Learning but incredibly costly for LLMs with high memory requirements, making such approaches infeasible for sufficiently large models. To this end, we propose using a shared encoder but separate linear heads. We find this leads to similar performance as the full ensemble while allowing tremendous savings in memory and time required for training for models of similar size.
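The core idea of the abstract can be illustrated with a small sketch: instead of maintaining several full reward models, a single shared encoder feeds multiple lightweight linear heads, and a pessimistic statistic over the heads' scalar outputs is used as the reward signal. The sketch below is an illustrative PyTorch assumption, not the paper's released code; the encoder interface (a HuggingFace-style model returning `last_hidden_state`) and the choice of minimum as the pessimistic statistic are assumptions for clarity.

```python
import torch
import torch.nn as nn


class SharedEncoderRewardEnsemble(nn.Module):
    """Illustrative sketch: one shared encoder, several separate linear reward heads.

    Assumptions (not from the paper's code): the encoder follows a
    HuggingFace-style interface and returns `last_hidden_state`, and the
    pessimistic statistic is a minimum over heads.
    """

    def __init__(self, encoder: nn.Module, hidden_dim: int, num_heads: int = 5):
        super().__init__()
        self.encoder = encoder  # shared transformer body (the expensive part)
        # Separate linear heads, each mapping the pooled state to a scalar reward.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(num_heads)]
        )

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # Shared forward pass; shape (batch, seq_len, hidden_dim).
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Pool the representation of the last non-padding token
        # (a common reward-model convention; assumed here).
        last_idx = attention_mask.sum(dim=1) - 1
        pooled = hidden[torch.arange(hidden.size(0)), last_idx]
        # One scalar reward per head; shape (batch, num_heads).
        return torch.cat([head(pooled) for head in self.heads], dim=-1)

    def pessimistic_reward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # Pessimistic aggregation over the ensemble of heads; min is one common
        # choice (mean minus std is another) and is an assumption here.
        return self.forward(input_ids, attention_mask).min(dim=-1).values
```

Because only the linear heads are duplicated, the memory and training-time cost grows negligibly with ensemble size compared to replicating the full encoder, which is the saving the abstract refers to.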

Authors (5)
  1. Ahmed M. Ahmed (5 papers)
  2. Rafael Rafailov (37 papers)
  3. Stepan Sharkov (1 paper)
  4. Xuechen Li (35 papers)
  5. Sanmi Koyejo (110 papers)
Citations (1)