Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking (2406.03704v2)

Published 6 Jun 2024 in cs.LG, cs.SY, and eess.SY

Abstract: Continuous action spaces in reinforcement learning (RL) are commonly defined as multidimensional intervals. While intervals usually reflect the action boundaries for tasks well, they can be challenging for learning because the typically large global action space leads to frequent exploration of irrelevant actions. Yet, little task knowledge can be sufficient to identify significantly smaller state-specific sets of relevant actions. Focusing learning on these relevant actions can significantly improve training efficiency and effectiveness. In this paper, we propose to focus learning on the set of relevant actions and introduce three continuous action masking methods for exactly mapping the action space to the state-dependent set of relevant actions. Thus, our methods ensure that only relevant actions are executed, enhancing the predictability of the RL agent and enabling its use in safety-critical applications. We further derive the implications of the proposed methods on the policy gradient. Using proximal policy optimization (PPO), we evaluate our methods on four control tasks, where the relevant action set is computed based on the system dynamics and a relevant state set. Our experiments show that the three action masking methods achieve higher final rewards and converge faster than the baseline without action masking.
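To make the core idea concrete, below is a minimal sketch of one way such state-dependent masking can be realized: an affine rescaling of the policy's raw action from the global interval onto a state-dependent relevant interval. This is an illustrative assumption, not the paper's exact formulation; in particular, the helper relevant_interval and its toy bounds are hypothetical stand-ins for the paper's computation, which derives the relevant action set from the system dynamics and a relevant state set.

```python
import numpy as np

def relevant_interval(state):
    """Hypothetical helper: returns state-dependent bounds (l, u) of the
    relevant action set. In the paper these are computed from the system
    dynamics and a relevant state set; here we use toy bounds that shrink
    the global interval [-1, 1] depending on the state."""
    l = -1.0 + 0.5 * np.tanh(state[0])
    u = 1.0 - 0.5 * np.tanh(state[0])
    return l, u

def mask_action(raw_action, state, global_low=-1.0, global_high=1.0):
    """Affinely rescale a raw policy action from the global interval
    [global_low, global_high] onto the state-dependent relevant interval
    [l, u], so that only relevant actions are ever executed."""
    l, u = relevant_interval(state)
    # Normalize the raw action to [0, 1], then map onto [l, u].
    t = (raw_action - global_low) / (global_high - global_low)
    return l + t * (u - l)

# Usage: wrap the environment step so the executed action is always relevant.
state = np.array([0.3])
raw = 0.8                          # sampled from the policy over the global interval
executed = mask_action(raw, state)  # guaranteed to lie in the relevant interval
```

Because the mapping depends on the state, it changes the effective action distribution of the policy; this is why the paper additionally derives the implications of its three masking methods for the policy gradient, rather than treating the mask as a fixed post-processing step.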

Authors (6)
  1. Roland Stolz (1 paper)
  2. Hanna Krasowski (10 papers)
  3. Jakob Thumm (8 papers)
  4. Michael Eichelbeck (6 papers)
  5. Philipp Gassert (4 papers)
  6. Matthias Althoff (66 papers)
