
Reward Gaming in Conditional Text Generation (2211.08714v3)

Published 16 Nov 2022 in cs.CL, cs.AI, and cs.LG

Abstract: To align conditional text generation model outputs with desired behaviors, there has been an increasing focus on training the model using reinforcement learning (RL) with reward functions learned from human annotations. Under this framework, we identify three common cases where high rewards are incorrectly assigned to undesirable patterns: noise-induced spurious correlation, naturally occurring spurious correlation, and covariate shift. We show that even though learned metrics achieve high performance on the distribution of the data used to train the reward function, the undesirable patterns may be amplified during RL training of the text generation model. While there has been discussion about reward gaming in the RL or safety community, in this discussion piece, we would like to highlight reward gaming in the natural language generation (NLG) community using concrete conditional text generation examples and discuss potential fixes and areas for future work.
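As a hedged illustration of the spurious-correlation failure mode the abstract describes (not taken from the paper; the reward function, the length feature, and all names below are hypothetical), the following toy Python sketch shows how a learned reward that happens to correlate quality with output length can be gamed: a degenerate, padded output scores higher under the learned reward than a genuinely good output, even though its true quality is zero.

```python
# Toy illustration (not from the paper): a learned reward that spuriously
# correlates quality with output length can be "gamed" by a generator that
# simply pads its outputs, even though true quality does not improve.

def true_quality(summary: str) -> float:
    """Stand-in for human judgment: rewards mentioning the keyword 'facts'."""
    return 1.0 if "facts" in summary else 0.0

def learned_reward(summary: str) -> float:
    """Hypothetical learned metric that picked up a spurious length feature
    from its training annotations: longer outputs tended to be rated higher."""
    return 0.5 * true_quality(summary) + 0.02 * len(summary.split())

# Candidate outputs a generator might produce.
candidates = [
    "the report lists the key facts",    # good, concise
    "the the the " * 20 + "report",      # degenerate, but long
]

# Naive reward maximization: pick the candidate the learned reward prefers.
best = max(candidates, key=learned_reward)
print("chosen:", best[:40], "...")
print("learned reward:", round(learned_reward(best), 2),
      "| true quality:", true_quality(best))
# The padded output wins under the learned reward while its true quality
# is 0 -- a minimal example of the reward gaming discussed in the abstract.
```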

Authors (5)
  1. Richard Yuanzhe Pang (26 papers)
  2. Vishakh Padmakumar (22 papers)
  3. Thibault Sellam (19 papers)
  4. Ankur P. Parikh (28 papers)
  5. He He (71 papers)
Citations (21)