Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer (2405.16436v3)
Abstract: Aligning generative models with human preference via RLHF typically suffers from overoptimization, where an imperfectly learned reward model can misguide the generative model into producing undesired responses. We investigate this problem in a principled manner by identifying the source of the misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model, namely one that simultaneously minimizes the maximum-likelihood estimation loss and a reward penalty term. The reward penalty term prevents the policy from choosing actions with spuriously high proxy rewards, which yields provable sample efficiency for the algorithm under a partial-coverage-style condition. Moving from theory to practice, the proposed algorithm further enjoys an equivalent but surprisingly easy-to-implement reformulation. Using the equivalence between reward models and their corresponding optimal policies, the algorithm features a simple objective that combines: (i) a preference optimization loss that directly aligns the policy with human preference, and (ii) a supervised learning loss that explicitly makes the policy imitate a (suitable) baseline distribution. In the context of aligning large language models (LLMs), this objective fuses the direct preference optimization (DPO) loss with the supervised fine-tuning (SFT) loss to help mitigate overoptimization toward undesired responses; accordingly, we name the algorithm Regularized Preference Optimization (RPO). Experiments on aligning LLMs demonstrate the improved performance of RPO compared with DPO baselines. Our work sheds light on the interplay between preference optimization and SFT in tuning LLMs, with both theoretical guarantees and empirical evidence.
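To make the abstract's description of the RPO objective concrete, here is a minimal PyTorch sketch of a DPO loss augmented with an SFT regularizer. This is an illustration rather than the paper's implementation: the function name `rpo_loss`, the weight `eta`, and the choice of the preferred response as the SFT baseline are assumptions made for concreteness.

```python
# Minimal sketch of an RPO-style objective: DPO preference loss plus an SFT
# (negative log-likelihood) term on a baseline response. The weight `eta` and
# the use of the preferred response as the baseline are illustrative
# assumptions, not details taken from the abstract.
import torch
import torch.nn.functional as F

def rpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps,
             beta=0.1, eta=1.0):
    """DPO loss regularized by an SFT term (illustrative sketch).

    All inputs are summed log-probabilities of full responses, shape (batch,).
    """
    # Standard DPO term: margin between policy and reference log-ratios.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    dpo_loss = -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()

    # SFT regularizer: imitate the baseline (here, the preferred responses).
    sft_loss = -policy_chosen_logps.mean()

    return dpo_loss + eta * sft_loss
```

In this sketch, the SFT term penalizes assigning low likelihood to the baseline responses, which is the mechanism the abstract attributes to RPO for keeping the policy from drifting toward responses with spuriously high proxy rewards.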
Authors: Zhihan Liu, Miao Lu, Shenao Zhang, Boyi Liu, Hongyi Guo, Yingxiang Yang, Jose Blanchet, Zhaoran Wang