
Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback

Published 8 Oct 2023 in cs.CL (arXiv:2310.05199v5)

Abstract: Reinforcement learning from human feedback (RLHF) serves as a crucial bridge, aligning LLMs with human and societal values. This alignment requires a vast corpus of human feedback to learn a reward model, which is subsequently used to fine-tune LLMs. However, we have identified that the reward model often finds shortcuts that bypass its intended objectives, misleadingly assuming that humans prefer longer responses. This length bias induces the model to favor longer outputs, even though longer outputs do not necessarily contain more helpful information. In this paper, we propose an innovative solution, applying the Product-of-Experts (PoE) technique to decouple reward modeling from the influence of sequence length. In our framework, the main expert concentrates on understanding human intent, while the biased expert targets the identification and capture of length bias. To further enhance the learning of bias, we introduce perturbations into the bias-focused expert, disrupting the flow of semantic information. Experimental results validate the effectiveness of our approach, indicating that LLM performance is improved irrespective of sequence length.
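The abstract's Product-of-Experts idea can be sketched in a few lines. The sketch below is an illustration, not the paper's code: the function names, the scalar reward inputs, and the exact combination rule are assumptions on my part. The core idea it shows is that during reward-model training the preference probability comes from a product of two experts (equivalently, a sigmoid over the *sum* of their reward margins on a chosen/rejected pair), while at inference only the main, length-debiased expert scores responses.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def poe_preference_prob(r_main_chosen: float, r_main_rejected: float,
                        r_bias_chosen: float, r_bias_rejected: float) -> float:
    """Training-time preference probability under a Product-of-Experts.

    The main expert models human intent; the bias expert (trained on
    semantically perturbed inputs in the paper's framework) is meant to
    absorb length bias. Summing the experts' reward margins inside the
    sigmoid is the product of experts in probability space.
    """
    margin = ((r_main_chosen + r_bias_chosen)
              - (r_main_rejected + r_bias_rejected))
    return sigmoid(margin)

def inference_reward(r_main: float, r_bias: float) -> float:
    """At inference, the bias expert is discarded: only the main
    (debiased) expert's reward is used to score or fine-tune the LLM."""
    return r_main
```

In this setup the bias expert soaks up the length shortcut during training, so the gradient pressure on the main expert to exploit length is reduced; dropping the bias expert at inference leaves a reward signal focused on human intent.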

Citations (33)
