
Action Guidance: Getting the Best of Sparse Rewards and Shaped Rewards for Real-time Strategy Games (2010.03956v1)

Published 5 Oct 2020 in cs.LG and stat.ML

Abstract: Training agents using Reinforcement Learning in games with sparse rewards is a challenging problem, since large amounts of exploration are required to retrieve even the first reward. To tackle this problem, a common approach is to use reward shaping to help exploration. However, an important drawback of reward shaping is that agents sometimes learn to optimize the shaped reward instead of the true objective. In this paper, we present a novel technique that we call action guidance that successfully trains agents to eventually optimize the true objective in games with sparse rewards while maintaining most of the sample efficiency that comes with reward shaping. We evaluate our approach in a simplified real-time strategy (RTS) game simulator called $\mu$RTS.
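To make the idea concrete, below is a minimal illustrative sketch (not the paper's actual algorithm) of combining a shaped-reward policy with a sparse-reward policy. The assumption here is a toy chain environment with a goal state: an auxiliary learner trains on a dense shaped reward, the main learner trains only on the true sparse reward, and the behavior policy follows the auxiliary learner with a guidance probability that is annealed to zero, so the main agent ends up optimizing the true objective. All names and hyperparameters are hypothetical.

```python
import random

random.seed(0)

N = 6                      # chain states 0..5, goal at state 5
GAMMA, ALPHA = 0.9, 0.5
ACTIONS = (-1, +1)         # move left / move right

def step(s, a):
    s2 = min(max(s + ACTIONS[a], 0), N - 1)
    sparse = 1.0 if s2 == N - 1 else 0.0          # true objective
    shaped = sparse + (0.1 if s2 > s else -0.1)   # dense shaping signal
    return s2, sparse, shaped, s2 == N - 1

q_main = [[0.0, 0.0] for _ in range(N)]   # learns from the sparse reward only
q_aux  = [[0.0, 0.0] for _ in range(N)]   # learns from the shaped reward

def greedy(q, s):
    return 0 if q[s][0] > q[s][1] else 1

for ep in range(500):
    guide_prob = max(0.0, 1.0 - ep / 300)  # anneal guidance toward zero
    s = 0
    for _ in range(4 * N):
        if random.random() < 0.1:                 # small exploration rate
            a = random.randrange(2)
        elif random.random() < guide_prob:        # follow shaped-reward policy
            a = greedy(q_aux, s)
        else:                                     # follow true-objective policy
            a = greedy(q_main, s)
        s2, r_sp, r_sh, done = step(s, a)
        # Off-policy Q-learning: each learner updates from its own reward.
        for q, r in ((q_main, r_sp), (q_aux, r_sh)):
            target = r + (0.0 if done else GAMMA * max(q[s2]))
            q[s][a] += ALPHA * (target - q[s][a])
        s = s2
        if done:
            break

# After training, the main policy (sparse reward only) reaches the goal.
s, steps = 0, 0
while s != N - 1 and steps < N:
    s, *_ = step(s, greedy(q_main, s))
    steps += 1
```

Because Q-learning is off-policy, the main learner can improve from trajectories generated under shaped-reward guidance, which is the intuition the abstract appeals to: shaping accelerates exploration early, while annealing the guidance leaves an agent that optimizes only the true objective.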

Authors (2)
  1. Shengyi Huang (16 papers)
  2. Santiago Ontañón (28 papers)
Citations (10)
