Backplay: "Man muss immer umkehren" (1807.06919v5)

Published 18 Jul 2018 in cs.LG, cs.AI, and stat.ML

Abstract: Model-free reinforcement learning (RL) requires a large number of trials to learn a good policy, especially in environments with sparse rewards. We explore a method to improve the sample efficiency when we have access to demonstrations. Our approach, Backplay, uses a single demonstration to construct a curriculum for a given task. Rather than starting each training episode in the environment's fixed initial state, we start the agent near the end of the demonstration and move the starting point backwards during the course of training until we reach the initial state. Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency. This includes reward shaping, behavioral cloning, and reverse curriculum generation.
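
The abstract describes Backplay as a curriculum over reset states taken from a single demonstration: training episodes begin near the end of the demonstration, and the starting point moves backwards toward the environment's true initial state as training progresses. The sketch below only illustrates that schedule; it assumes a recorded list of demonstration states, an environment that can be reset to an arbitrary state, and a hypothetical `window` hyperparameter with a linear schedule (the paper's exact schedule is not given in the abstract), so it is an illustration rather than the authors' precise procedure.

```python
import random

def backplay_start_state(demo_states, train_step, total_steps, window=4):
    """
    Pick a reset state from a single demonstration, Backplay-style:
    early in training, start near the *end* of the demonstration, and move
    the sampling window backwards toward the initial state as training goes on.

    demo_states : list of environment states from one demonstration
                  (demo_states[0] is the initial state, demo_states[-1] the end).
    train_step  : current training step.
    total_steps : total training steps; controls how fast the window recedes
                  (a simple linear schedule, assumed here for illustration).
    window      : number of demonstration states sampled from at each stage
                  (a hyperparameter of this sketch, not taken from the paper).
    """
    T = len(demo_states) - 1
    # Fraction of training completed, clipped to [0, 1].
    progress = min(train_step / max(total_steps, 1), 1.0)
    # How far back from the end of the demonstration we have moved so far.
    offset = int(progress * T)
    hi = T - offset                      # latest index we may start from
    lo = max(hi - window + 1, 0)         # earliest index in the current window
    return demo_states[random.randint(lo, hi)]
```

In a training loop this would replace the usual fixed reset: restore the environment to the sampled demonstration state (which requires an environment whose state can be set directly, as in the grid worlds and Pommerman setups mentioned above), then train an ordinary model-free RL agent from there.
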

Authors (6)
  1. Cinjon Resnick (11 papers)
  2. Roberta Raileanu (41 papers)
  3. Sanyam Kapoor (15 papers)
  4. Alexander Peysakhovich (22 papers)
  5. Kyunghyun Cho (292 papers)
  6. Joan Bruna (119 papers)
Citations (45)
