
Prior Constraints-based Reward Model Training for Aligning Large Language Models (2404.00978v2)

Published 1 Apr 2024 in cs.CL

Abstract: Reinforcement learning with human feedback for aligning LLMs typically trains a reward model using a ranking loss over comparison pairs. However, this training procedure suffers from an inherent problem: reward scores scale uncontrollably during reinforcement learning because the reward model is trained without constraints. This paper proposes a Prior Constraints-based Reward Model (PCRM) training method to mitigate this problem. PCRM incorporates prior constraints, specifically the length ratio and cosine similarity between the outputs of each comparison pair, during reward model training to regulate optimization magnitude and control score margins. We comprehensively evaluate PCRM by examining its rank correlation with human preferences and its effectiveness in aligning LLMs via RL. Experimental results demonstrate that PCRM significantly improves alignment performance by effectively constraining reward score scaling. As an additional benefit, our method integrates easily into arbitrary rank-based alignment methods, such as direct preference optimization, and yields consistent improvements.
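
The abstract does not specify exactly how the two priors enter the loss. The sketch below illustrates one plausible reading: a standard pairwise ranking loss augmented with a per-pair margin derived from the length ratio and cosine similarity of the compared outputs. The function name `pcrm_ranking_loss`, the particular combination `alpha * (1 - cos_sim) * (1 - length_ratio)`, and the tensor inputs are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pcrm_ranking_loss(reward_chosen, reward_rejected,
                      len_chosen, len_rejected,
                      emb_chosen, emb_rejected, alpha=1.0):
    """Hypothetical prior-constraints-based pairwise ranking loss.

    reward_chosen / reward_rejected: scalar rewards per pair, shape (batch,)
    len_chosen / len_rejected: token lengths per output, shape (batch,)
    emb_chosen / emb_rejected: output embeddings, shape (batch, dim)
    """
    # Length-ratio prior: pairs with similar lengths contribute a smaller margin.
    length_ratio = torch.minimum(len_chosen, len_rejected) / torch.maximum(len_chosen, len_rejected)

    # Cosine-similarity prior: more similar outputs contribute a smaller margin.
    cos_sim = F.cosine_similarity(emb_chosen, emb_rejected, dim=-1)

    # Assumed combination of the two priors into a per-pair margin that
    # bounds how far the chosen reward must exceed the rejected reward.
    margin = alpha * (1.0 - cos_sim) * (1.0 - length_ratio)

    # Pairwise ranking loss with the prior-derived margin, which caps the
    # incentive to inflate score gaps and thus constrains reward scaling.
    return -F.logsigmoid(reward_chosen - reward_rejected - margin).mean()
```

Under this reading, the margin is largest for dissimilar pairs and shrinks toward zero for near-duplicate outputs, so the reward model is not pushed to produce ever-larger score gaps on easy comparisons.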

Authors (6)
  1. Hang Zhou (166 papers)
  2. Chenglong Wang (80 papers)
  3. Yimin Hu (6 papers)
  4. Tong Xiao (119 papers)
  5. Chunliang Zhang (12 papers)
  6. Jingbo Zhu (79 papers)
Citations (1)