
On the Limited Generalization Capability of the Implicit Reward Model Induced by Direct Preference Optimization (2409.03650v2)

Published 5 Sep 2024 in cs.LG and cs.CL

Abstract: Reinforcement Learning from Human Feedback (RLHF) is an effective approach for aligning LLMs to human preferences. Central to RLHF is learning a reward function for scoring human preferences. Two main approaches for learning a reward model are 1) training an EXplicit Reward Model (EXRM) as in RLHF, and 2) using an implicit reward learned from preference data through methods such as Direct Preference Optimization (DPO). Prior work has shown that the implicit reward model of DPO (denoted as DPORM) can approximate an EXRM in the limit. DPORM's effectiveness directly implies the optimality of the learned policy and also has practical implications for LLM alignment methods, including iterative DPO. However, it is unclear how well DPORM empirically matches the performance of EXRM. This work studies the accuracy of both DPORM and EXRM at distinguishing preferred from rejected answers. Our findings indicate that even though DPORM fits the training dataset comparably, it generalizes less effectively than EXRM, especially when the validation datasets contain distribution shifts. Across five out-of-distribution settings, DPORM has a mean drop in accuracy of 3% and a maximum drop of 7%. These findings highlight that DPORM has limited generalization ability and substantiate the integration of an explicit reward model in iterative DPO approaches.
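
For context, the implicit reward referred to above follows the standard DPO parameterization (established in the original DPO derivation, not restated in this abstract): with policy $\pi_\theta$, reference policy $\pi_{\mathrm{ref}}$, and scaling coefficient $\beta$, the implicit reward, and a generic preference-accuracy criterion of the kind used to compare reward models, can be written as in the sketch below. The accuracy expression is an illustrative rendering of "distinguishing preferred from rejected answers", not a formula quoted from the paper.

```latex
% DPO implicit reward (defined up to a prompt-only term that cancels in pairwise comparisons)
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}

% Preference accuracy of a reward model r on triples (x, y_w, y_l), where y_w is preferred over y_l
\mathrm{Acc}(r) = \mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
    \left[ \mathbf{1}\{\, r(x, y_w) > r(x, y_l) \,\} \right]
```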

Authors (9)
  1. Yong Lin (77 papers)
  2. Skyler Seto (22 papers)
  3. Maartje ter Hoeve (21 papers)
  4. Katherine Metcalf (16 papers)
  5. Barry-John Theobald (34 papers)
  6. Xuan Wang (205 papers)
  7. Yizhe Zhang (127 papers)
  8. Chen Huang (88 papers)
  9. Tong Zhang (569 papers)
Citations (3)