
Sample Efficiency in Sparse Reinforcement Learning: Or Your Money Back (2008.12693v1)

Published 28 Aug 2020 in cs.LG, cs.AI, and stat.ML

Abstract: Sparse rewards present a difficult problem in reinforcement learning and may be inevitable in certain domains with complex dynamics such as real-world robotics. Hindsight Experience Replay (HER) is a recent replay memory development that allows agents to learn in sparse settings by altering memories to show them as successful even though they may not be. While, empirically, HER has shown some success, it does not provide guarantees around the makeup of samples drawn from an agent's replay memory. This may result in minibatches that contain only memories with zero-valued rewards or agents learning an undesirable policy that completes HER-adjusted goals instead of the actual goal. In this paper, we introduce Or Your Money Back (OYMB), a replay memory sampler designed to work with HER. OYMB improves training efficiency in sparse settings by providing a direct interface to the agent's replay memory that allows for control over minibatch makeup, as well as a preferential lookup scheme that prioritizes real-goal memories before HER-adjusted memories. We test our approach on five tasks across three unique environments. Our results show that using HER in combination with OYMB outperforms using HER alone and leads to agents that learn to complete the real goal more quickly.
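To make the abstract's description of OYMB concrete, the following is a minimal, illustrative sketch of a replay sampler with the two properties described above: explicit control over minibatch makeup (a guaranteed fraction of reward-bearing transitions) and a preferential lookup that draws real-goal memories before HER-adjusted ones. The class name, method names, and default ratio are assumptions for illustration only, not the paper's actual interface.

```python
import random


class OYMBStyleSampler:
    """Illustrative sketch of an OYMB-style replay sampler (not the paper's API).

    Transitions are split into three pools: real-goal successes,
    HER-adjusted successes, and zero-reward transitions. Every minibatch
    reserves a fraction of its slots for reward-bearing samples, and those
    slots are filled from real-goal memories before HER-adjusted ones.
    """

    def __init__(self, reward_fraction=0.5):
        self.real_goal = []       # transitions that achieved the actual goal
        self.her_adjusted = []    # transitions relabeled as successes by HER
        self.zero_reward = []     # remaining sparse, zero-reward transitions
        self.reward_fraction = reward_fraction

    def add(self, transition, is_real_goal=False, is_her_adjusted=False):
        if is_real_goal:
            self.real_goal.append(transition)
        elif is_her_adjusted:
            self.her_adjusted.append(transition)
        else:
            self.zero_reward.append(transition)

    def sample(self, batch_size):
        n_reward = int(batch_size * self.reward_fraction)

        # Preferential lookup: take real-goal memories first, then top up
        # the reward-bearing quota with HER-adjusted memories.
        k_real = min(n_reward, len(self.real_goal))
        batch = random.sample(self.real_goal, k_real)
        k_her = min(n_reward - k_real, len(self.her_adjusted))
        batch += random.sample(self.her_adjusted, k_her)

        # Fill the rest of the minibatch with ordinary transitions so the
        # batch size stays fixed even when reward-bearing memories are scarce.
        remainder = batch_size - len(batch)
        if remainder > 0 and self.zero_reward:
            batch += random.choices(self.zero_reward, k=remainder)
        return batch
```

Under this reading, the sampler guarantees that no minibatch consists solely of zero-reward memories (when any reward-bearing memories exist), which is the failure mode of plain HER sampling that the abstract highlights.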

Authors (1)
  1. Trevor A. McInroe (2 papers)