Neighboring state-based RL Exploration (2212.10712v2)

Published 21 Dec 2022 in cs.LG and cs.AI

Abstract: Reinforcement Learning is a powerful tool to model decision-making processes. However, it relies on an exploration-exploitation trade-off that remains an open challenge for many tasks. In this work, we study neighboring state-based, model-free exploration, guided by the intuition that, for an early-stage agent, considering actions derived from a bounded region of nearby states may lead to better actions when exploring. We propose two algorithms that choose exploratory actions based on a survey of nearby states, and find that one of our methods, $\rho$-explore, consistently outperforms the Double DQN baseline in a discrete environment by 49% in terms of Eval Reward Return.
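The paper's exact procedure is not reproduced here; the sketch below is only a hypothetical illustration of the neighboring-state idea described in the abstract. It replaces the uniform-random exploration step of an epsilon-greedy Double DQN agent with a vote over Q-greedy actions at states sampled from a bounded region of radius rho around the current state. The function name `rho_explore_action`, the `q_values_fn` callable, and all parameter choices are assumptions, not the authors' implementation.

```python
import numpy as np

def rho_explore_action(q_values_fn, state, n_actions,
                       rho=0.1, n_neighbors=8, epsilon=0.1,
                       rng=np.random.default_rng()):
    """Epsilon-greedy variant (illustrative sketch, not the paper's code).

    On exploration steps, survey states sampled from a bounded region of
    radius `rho` around the current state and return the action that is
    greedy for the largest number of surveyed neighbors.

    q_values_fn(state) -> np.ndarray of shape (n_actions,)
    """
    if rng.random() >= epsilon:
        # Exploit: ordinary greedy action on the current state.
        return int(np.argmax(q_values_fn(state)))

    # Explore: sample neighboring states within an L-infinity ball of radius rho.
    state = np.asarray(state, dtype=np.float64)
    neighbors = state + rng.uniform(-rho, rho, size=(n_neighbors,) + state.shape)

    # Survey the neighbors: count how often each action is greedy nearby.
    votes = np.zeros(n_actions, dtype=np.int64)
    for s in neighbors:
        votes[int(np.argmax(q_values_fn(s)))] += 1
    return int(np.argmax(votes))
```

In this reading, exploration is still stochastic (neighbors are sampled at random), but the chosen action is informed by the local Q-landscape rather than drawn uniformly over the action set.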

Authors (4)
  1. Jeffery Cheng (1 paper)
  2. Kevin Li (59 papers)
  3. Justin Lin (10 papers)
  4. Pedro Pachuca (2 papers)
