
RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning (2405.19548v1)

Published 29 May 2024 in cs.LG

Abstract: Extrinsic rewards can effectively guide reinforcement learning (RL) agents in specific tasks. However, extrinsic rewards frequently fall short in complex environments due to the significant human effort needed for their design and annotation. This limitation underscores the necessity for intrinsic rewards, which offer auxiliary and dense signals and can enable agents to learn in an unsupervised manner. Although various intrinsic reward formulations have been proposed, their implementation and optimization details are insufficiently explored and lack standardization, thereby hindering research progress. To address this gap, we introduce RLeXplore, a unified, highly modularized, and plug-and-play framework offering reliable implementations of eight state-of-the-art intrinsic reward algorithms. Furthermore, we conduct an in-depth study that identifies critical implementation details and establishes well-justified standard practices in intrinsically-motivated RL. The source code for RLeXplore is available at https://github.com/RLE-Foundation/RLeXplore.
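The abstract describes intrinsic rewards as dense, unsupervised signals added on top of (often sparse) extrinsic rewards. As a minimal illustrative sketch of that idea, the toy module below computes a count-based exploration bonus and adds it to the task reward. The class name and interface are hypothetical and do not reflect RLeXplore's actual API; see the linked repository for the real implementations.

```python
import math
from collections import defaultdict

class CountBonus:
    """Toy count-based intrinsic reward: r_int = beta / sqrt(N(s)).

    Illustrates a dense exploration signal that needs no human
    annotation; NOT RLeXplore's API, just a sketch of the concept.
    """

    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = defaultdict(int)  # visit counts per (discretized) state

    def compute(self, state):
        # Rarely visited states yield a larger bonus.
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBonus(beta=0.1)
extrinsic = 0.0          # sparse task reward, often zero early in training
state = (3, 4)           # e.g. a discretized grid position
total = extrinsic + bonus.compute(state)  # first visit: 0.1 / sqrt(1) = 0.1
```

In a full agent, such a module would be queried on every transition and its output summed with (or annealed against) the extrinsic reward before the policy update; RLeXplore standardizes exactly these integration details across eight such algorithms.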

Authors (6)
  1. Mingqi Yuan (16 papers)
  2. Roger Creus Castanyer (7 papers)
  3. Bo Li (1107 papers)
  4. Xin Jin (285 papers)
  5. Glen Berseth (48 papers)
  6. Wenjun Zeng (130 papers)