Teacher Forcing Recovers Reward Functions for Text Generation (2210.08708v2)

Published 17 Oct 2022 in cs.LG, cs.AI, and cs.CL

Abstract: Reinforcement learning (RL) has been widely used in text generation to alleviate the exposure bias issue or to utilize non-parallel datasets. The reward function plays an important role in making RL training successful. However, previous reward functions are typically task-specific and sparse, restricting the use of RL. In our work, we propose a task-agnostic approach that derives a step-wise reward function directly from a model trained with teacher forcing. We additionally propose a simple modification to stabilize the RL training on non-parallel datasets with our induced reward function. Empirical results show that our method outperforms self-training and reward regression methods on several text generation tasks, confirming the effectiveness of our reward function.
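The abstract describes inducing a dense, step-wise reward directly from a model trained with teacher forcing. As a rough illustration only (the abstract does not spell out the exact formulation), one natural reading is to take the reward at each step to be the token's log-probability under the teacher-forced model, which yields a per-token signal an RL trainer can consume instead of a sparse sequence-level reward. The sketch below assumes a Hugging Face-style causal LM whose forward pass returns `.logits`; the function name and setup are illustrative, not the paper's code.

```python
# A minimal sketch, assuming the step-wise reward at position t is the
# teacher-forced model's log-probability of token y_t given y_<t.
# This specific choice is an assumption made for illustration; the paper's
# exact reward construction is not given in the abstract.
import torch
import torch.nn.functional as F

@torch.no_grad()
def stepwise_rewards(model, input_ids):
    """Return one reward per generated token under a teacher-forced model.

    model:     a causal LM whose forward pass returns an object with .logits
               (Hugging Face convention; an assumption here).
    input_ids: LongTensor of shape (batch, seq_len), a complete sequence.
    """
    logits = model(input_ids).logits                  # (batch, seq_len, vocab)
    log_probs = F.log_softmax(logits[:, :-1], dim=-1) # predictions for steps 1..T-1
    targets = input_ids[:, 1:]                        # next-token targets
    # Reward for step t = log p(y_t | y_<t): a dense, task-agnostic signal.
    return log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (batch, seq_len-1)
```

In an RL loop (e.g. policy gradient), these per-token values would replace a single end-of-sequence reward, which is the sparsity problem the abstract says the induced reward addresses.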

Authors (3)
  1. Yongchang Hao
  2. Yuxin Liu
  3. Lili Mou

Citations (7)