Towards Understanding the Influence of Reward Margin on Preference Model Performance (2404.04932v1)

Published 7 Apr 2024 in cs.CL and cs.AI

Abstract: Reinforcement Learning from Human Feedback (RLHF) is a widely used framework for training LLMs. However, using RLHF to develop a well-aligned LLM presents challenges, especially when it comes to optimizing the reward model. Our research has found that existing reward models, when trained with the traditional ranking objective on human preference data, often struggle to effectively distinguish between more and less favorable responses in real-world scenarios. To bridge this gap, our study introduces a novel method to estimate preference differences without the need for detailed, exhaustive labels from human annotators. Our experimental results provide empirical evidence that incorporating margin values into the training process significantly improves the effectiveness of reward models. This comparative analysis not only demonstrates the superiority of our approach in terms of reward prediction accuracy but also highlights its effectiveness in practical applications.
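To make the idea concrete, below is a minimal sketch of a margin-augmented pairwise ranking loss for a reward model, assuming a PyTorch setting. The function name, the dummy scores, and the exact way the margin enters the log-sigmoid term are illustrative assumptions, not the paper's precise formulation; the point is only that the chosen response must outscore the rejected one by an estimated margin before a pair stops contributing to the loss.

```python
import torch
import torch.nn.functional as F

def margin_ranking_loss(chosen_rewards, rejected_rewards, margins=None):
    """Bradley-Terry-style pairwise loss for reward model training.

    chosen_rewards / rejected_rewards: scalar reward scores, shape (batch,).
    margins: optional per-pair preference-difference estimates, shape (batch,).
             When given, the score gap is shifted by the margin, so the model
             is pushed to separate strongly preferred pairs more widely.
             (Hypothetical formulation for illustration.)
    """
    diff = chosen_rewards - rejected_rewards
    if margins is not None:
        diff = diff - margins  # require the chosen score to clear the margin
    return -F.logsigmoid(diff).mean()

# Usage with dummy reward scores and margin estimates
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.9, 0.5, 1.1])
margins = torch.tensor([0.5, 0.2, 0.8])
print(margin_ranking_loss(chosen, rejected, margins).item())
```

With `margins=None` this reduces to the standard ranking objective on preference pairs; the margin term is what distinguishes the approach described in the abstract.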

Authors (3)
  1. Bowen Qin (16 papers)
  2. Duanyu Feng (13 papers)
  3. Xi Yang (160 papers)
Citations (2)