Neighboring state-based RL Exploration (2212.10712v2)
Published 21 Dec 2022 in cs.LG and cs.AI
Abstract: Reinforcement Learning is a powerful tool to model decision-making processes. However, it relies on an exploration-exploitation trade-off that remains an open challenge for many tasks. In this work, we study neighboring state-based, model-free exploration, led by the intuition that, for an early-stage agent, considering actions derived from a bounded region of nearby states may lead to better actions when exploring. We propose two algorithms that choose exploratory actions based on a survey of nearby states, and find that one of our methods, ${\rho}$-explore, consistently outperforms the Double DQN baseline in a discrete environment by 49% in terms of Eval Reward Return.
- Jeffery Cheng (1 paper)
- Kevin Li (59 papers)
- Justin Lin (10 papers)
- Pedro Pachuca (2 papers)
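The abstract does not spell out the full algorithm, but the core idea, replacing uniform random exploration with an action chosen from a survey of nearby states, can be sketched. Below is a minimal, hypothetical Python sketch assuming a DQN-style Q-network over continuous state features and Gaussian perturbation as the notion of "nearby"; the function name `rho_explore_action`, the perturbation scheme, and the mean aggregation are all illustrative assumptions, not the paper's confirmed method.

```python
import torch

def rho_explore_action(q_net, state, n_neighbors=8, radius=0.1):
    """Hypothetical sketch of neighboring state-based exploration.

    Instead of picking a uniform random action (as in epsilon-greedy),
    survey a bounded region of states around the current state and act
    greedily with respect to the Q-values aggregated over that survey.
    Gaussian perturbation and mean aggregation are assumptions here;
    in a truly discrete state space, "nearby" might instead mean
    states reachable within a few transitions.
    """
    state = torch.as_tensor(state, dtype=torch.float32)
    # Sample nearby states from a bounded region around the current state.
    noise = torch.randn(n_neighbors, *state.shape) * radius
    neighbors = state.unsqueeze(0) + noise
    with torch.no_grad():
        q_values = q_net(neighbors)  # shape: (n_neighbors, n_actions)
    # Choose the action that looks best on average across the surveyed states.
    return int(q_values.mean(dim=0).argmax().item())
```

In a Double DQN training loop, a sketch like this would replace only the random branch of the epsilon-greedy policy; exploitation steps would still use `q_net(state).argmax()` on the current state alone.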