
Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards (1907.10247v3)

Published 24 Jul 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Reinforcement learning with sparse rewards is challenging because an agent rarely obtains non-zero rewards, so gradient-based optimization of parameterized policies can be incremental and slow. Recent work demonstrated that using a memory buffer of previous successful trajectories can result in more effective policies. However, existing methods may overly exploit past successful experiences, which can encourage the agent to adopt sub-optimal and myopic behaviors. In this work, instead of focusing on good experiences with limited diversity, we propose to learn a trajectory-conditioned policy to follow and expand diverse past trajectories from a memory buffer. Our method allows the agent to reach diverse regions in the state space and improve upon past trajectories to reach new states. We empirically show that our approach significantly outperforms count-based exploration methods (a parametric approach) and self-imitation learning (a parametric approach with non-parametric memory) on various complex tasks with local optima. In particular, without using expert demonstrations or resetting to arbitrary states, we achieve state-of-the-art scores within five billion frames on challenging Atari games such as Montezuma's Revenge and Pitfall.
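The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of the memory-buffer idea described in the abstract: store diverse past trajectories keyed by where they end, sample one as a demonstration for a trajectory-conditioned policy, and keep new trajectories that reach novel regions or improve on stored ones. The toy environment, state embedding, and random placeholder policy are assumptions made for illustration only.

```python
# Hypothetical sketch of diverse trajectory-conditioned self-imitation,
# assuming a toy sparse-reward chain environment and a placeholder policy.
import random

class ToyChainEnv:
    """Sparse-reward chain: reward 1.0 only when position 10 is reached."""
    def reset(self):
        self.pos, self.t = 0, 0
        return self.pos

    def step(self, action):              # action in {-1, +1}
        self.pos = max(0, self.pos + action)
        self.t += 1
        reward = 1.0 if self.pos == 10 else 0.0
        done = self.t >= 20 or reward > 0
        return self.pos, reward, done

def embedding(state):
    # Toy stand-in: the state itself is the key. The paper uses compact
    # state representations to decide whether a region is novel.
    return state

class TrajectoryBuffer:
    def __init__(self):
        self.entries = {}                # end-state embedding -> (return, trajectory)

    def add(self, trajectory, ret):
        key = embedding(trajectory[-1])
        best = self.entries.get(key)
        # Keep a trajectory if it reaches a new region or beats the stored return.
        if best is None or ret > best[0]:
            self.entries[key] = (ret, trajectory)

    def sample_demonstration(self):
        # Sampling uniformly over diverse end-states (rather than only the
        # highest-return trajectory) is what keeps exploration from collapsing
        # onto one myopic behavior.
        return random.choice(list(self.entries.values()))[1]

def policy(state, demonstration):
    # Placeholder for the trajectory-conditioned policy, which would first
    # imitate the demonstration and then act to extend it; here it is random.
    return random.choice([-1, 1])

env, buffer = ToyChainEnv(), TrajectoryBuffer()
buffer.add([env.reset()], 0.0)           # seed the buffer with a trivial trajectory
for episode in range(200):
    demo = buffer.sample_demonstration()
    state, done, traj, ret = env.reset(), False, [], 0.0
    while not done:
        state, reward, done = env.step(policy(state, demo))
        traj.append(state)
        ret += reward
    buffer.add(traj, ret)                # expand the buffer with new trajectories
print("diverse end-states stored:", sorted(buffer.entries))
```

In the actual method, the buffer update and the trajectory-conditioned policy are what let the agent both revisit diverse regions and improve beyond them, rather than repeatedly exploiting a single high-return trajectory.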

Authors (7)
  1. Yijie Guo (31 papers)
  2. Jongwook Choi (16 papers)
  3. Marcin Moczulski (9 papers)
  4. Shengyu Feng (14 papers)
  5. Samy Bengio (75 papers)
  6. Mohammad Norouzi (81 papers)
  7. Honglak Lee (174 papers)
Citations (10)
