Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning (2007.02832v1)

Published 6 Jul 2020 in cs.LG, cs.AI, cs.RO, and stat.ML

Abstract: What goals should a multi-goal reinforcement learning agent pursue during training in long-horizon tasks? When the desired (test time) goal distribution is too distant to offer a useful learning signal, we argue that the agent should not pursue unobtainable goals. Instead, it should set its own intrinsic goals that maximize the entropy of the historical achieved goal distribution. We propose to optimize this objective by having the agent pursue past achieved goals in sparsely explored areas of the goal space, which focuses exploration on the frontier of the achievable goal set. We show that our strategy achieves an order of magnitude better sample efficiency than the prior state of the art on long-horizon multi-goal tasks including maze navigation and block stacking.
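The goal-selection idea in the abstract can be sketched as follows: estimate the density of the historical achieved-goal distribution (here with a simple Gaussian kernel density estimate, an assumption for illustration; the paper's exact estimator and selection rule may differ) and pick a past achieved goal from the lowest-density region, i.e. the frontier of what the agent has reached so far.

```python
import numpy as np

def select_intrinsic_goal(achieved_goals, bandwidth=0.1):
    """Pick a past achieved goal from a sparsely explored region.

    Scores each achieved goal by a Gaussian kernel density estimate
    over the full set of achieved goals, then returns the goal with
    the lowest estimated density. Pursuing such goals pushes the
    achieved-goal distribution toward higher entropy.
    """
    goals = np.asarray(achieved_goals, dtype=float)  # shape (N, d)
    # Pairwise squared distances between all achieved goals.
    diffs = goals[:, None, :] - goals[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    # Unnormalized Gaussian KDE score for each goal.
    density = np.exp(-sq_dists / (2 * bandwidth ** 2)).sum(axis=1)
    # The least-dense goal lies on the frontier of the achievable set.
    return goals[np.argmin(density)]
```

With a tight cluster of achieved goals plus one outlier, the outlier is selected as the next intrinsic goal, since it sits in the least-explored part of the goal space.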

Authors (5)
  1. Silviu Pitis (14 papers)
  2. Harris Chan (13 papers)
  3. Stephen Zhao (3 papers)
  4. Bradly Stadie (6 papers)
  5. Jimmy Ba (55 papers)
Citations (112)
