Connecting the Dots Between MLE and RL for Sequence Prediction (1811.09740v2)

Published 24 Nov 2018 in cs.LG, cs.AI, cs.CL, and stat.ML

Abstract: Sequence prediction models can be learned from example sequences with a variety of training algorithms. Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time. Reinforcement learning methods such as policy gradient address this issue but can have prohibitively poor exploration efficiency. A rich set of other algorithms, such as RAML, SPG, and data noising, has also been developed from different perspectives. This paper establishes a formal connection between these algorithms. We present a generalized entropy-regularized policy optimization formulation and show that the apparently distinct algorithms can all be reformulated as special instances of this framework, differing only in the configuration of a reward function and a couple of hyperparameters. The unified interpretation offers a systematic view of the algorithms' varying exploration behavior and learning efficiency. In addition, inspired by the framework, we present a new algorithm that dynamically interpolates among the family of algorithms for scheduled sequence model learning. Experiments on machine translation, text summarization, and game imitation learning demonstrate the superiority of the proposed algorithm.
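The unifying objective can be sketched in the general form of entropy-regularized policy optimization that the abstract refers to; the symbols below (variational distribution q, model p_θ, reward R, and weights α, β) are standard ERPO notation assumed here for illustration, not a verbatim quotation of the paper's equations:

\[
\mathcal{L}(q, \theta) \;=\; \mathbb{E}_{q(\mathbf{y}\mid\mathbf{x})}\!\big[ R(\mathbf{y}\mid\mathbf{y}^{*}) \big] \;-\; \alpha\,\mathrm{KL}\!\big( q(\mathbf{y}\mid\mathbf{x}) \,\big\|\, p_{\theta}(\mathbf{y}\mid\mathbf{x}) \big) \;+\; \beta\,\mathrm{H}(q),
\]

whose maximizing distribution has the closed form

\[
q(\mathbf{y}\mid\mathbf{x}) \;\propto\; \exp\!\left\{ \frac{\alpha \log p_{\theta}(\mathbf{y}\mid\mathbf{x}) + R(\mathbf{y}\mid\mathbf{y}^{*})}{\alpha + \beta} \right\}.
\]

Under this reading, for example, a delta reward that credits only the ground-truth sequence together with a vanishing KL weight collapses q onto the training data and recovers plain maximum likelihood, while richer task rewards and other weightings yield RAML-, SPG-, and policy-gradient-style updates; the scheduled algorithm the abstract describes moves among such configurations over the course of training.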

Citations (24)
