
Improved Exploring Starts by Kernel Density Estimation-Based State-Space Coverage Acceleration in Reinforcement Learning (2105.08990v2)

Published 19 May 2021 in cs.LG, cs.SY, and eess.SY

Abstract: Reinforcement learning (RL) is currently a popular research topic in control engineering and has the potential to make its way to industrial and commercial applications. Corresponding RL controllers are trained in direct interaction with the controlled system, rendering them data-driven and performance-oriented solutions. The best practice of exploring starts (ES) is used by default to support the learning process via randomly picked initial states. However, this method might deliver strongly biased results if the system's dynamics and constraints lead to unfavorable sample distributions in the state space (e.g., condensed sample accumulation in certain state-space areas). To overcome this issue, a kernel density estimation-based state-space coverage acceleration (DESSCA) is proposed, which improves the ES concept by prioritizing infrequently visited states for a more balanced coverage of the state space during training. Compared to neighbouring methods in the field of count-based exploration, DESSCA can also be applied to continuous state spaces without the need for artificial discretization of the states. Moreover, the algorithm allows arbitrary reference state distributions to be defined such that the state coverage can be shaped w.r.t. the application needs. Considered test scenarios are mountain car, cartpole and electric motor control environments. Using DQN and DDPG as exemplary RL algorithms, it can be shown that DESSCA is a simple yet effective algorithmic extension to the established ES approach that enables an increase in learning stability as well as the final control performance.
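The core idea — estimating the visitation density of previously seen states with a kernel density estimate and preferring low-density regions when drawing the next initial state — can be sketched as follows. This is a minimal illustration of the concept only, not the authors' implementation; the function name `dessca_pick`, the fixed Gaussian bandwidth, and the candidate-sampling scheme are assumptions made for the example.

```python
import numpy as np

def dessca_pick(visited, candidates, bandwidth=0.2):
    """Illustrative sketch: among candidate initial states, pick the one
    with the lowest estimated visitation density (Gaussian KDE over the
    states visited so far). Continuous states, no discretization needed."""
    if len(visited) == 0:
        # No history yet: any candidate is equally unexplored.
        return np.asarray(candidates[0], dtype=float)
    visited = np.asarray(visited, dtype=float)        # (V, d)
    candidates = np.asarray(candidates, dtype=float)  # (C, d)
    # Gaussian KDE: mean of isotropic Gaussian kernels centred on
    # the visited states, evaluated at each candidate.
    diffs = candidates[:, None, :] - visited[None, :, :]   # (C, V, d)
    sq_dist = np.sum(diffs ** 2, axis=-1)                  # (C, V)
    density = np.exp(-0.5 * sq_dist / bandwidth ** 2).mean(axis=1)  # (C,)
    # Prioritize the least-visited region of the state space.
    return candidates[np.argmin(density)]

# Usage: visits cluster near the origin, so a distant candidate wins.
visited = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
candidates = [[0.05, 0.05], [2.0, 2.0]]
start = dessca_pick(visited, candidates)
```

A non-uniform reference state distribution, as mentioned in the abstract, could be incorporated by weighting each candidate's density by the desired density at that state before taking the argmin.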

Authors (2)
  1. Maximilian Schenke (3 papers)
  2. Oliver Wallscheid (13 papers)
Citations (5)