Adaptive Reinforcement Learning for State Avoidance in Discrete Event Systems (2503.00192v3)
Abstract: Reinforcement learning (RL) has emerged as a potent paradigm for autonomous decision-making in complex environments. However, integrating event-driven decision processes into RL remains a challenge. This paper presents a novel architecture that combines a Discrete Event Supervisory (DES) model with a standard RL framework to create a hybrid decision-making system. Our model couples the DES's ability to manage event-based dynamics with the RL agent's adaptability to continuous states and actions, enabling a more robust and flexible control strategy for systems characterized by both continuous dynamics and discrete events. The DES model operates alongside the RL agent, enhancing the policy's performance with event-based insights, while the environment's state transitions are governed by a mechanistic model. We demonstrate the efficacy of our approach through simulations showing improved performance metrics over traditional RL implementations. Our results suggest that this integrated approach holds promise for applications ranging from industrial automation to intelligent traffic systems, where discrete event handling is paramount.
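The core idea of the architecture, a discrete-event supervisor that masks actions leading to forbidden states while an RL agent learns over the remaining choices, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the event set, forbidden states, and the tabular Q-learning agent are all hypothetical placeholders.

```python
import random

# Hypothetical DES model: state -> {event: next_state}.
# The automaton, event names, and forbidden set are illustrative only.
TRANSITIONS = {
    "idle":    {"start": "running"},
    "running": {"pause": "idle", "overload": "fault", "finish": "done"},
    "fault":   {"reset": "idle"},
    "done":    {},
}
FORBIDDEN = {"fault"}  # states the supervisor must keep unreachable


def allowed_events(state):
    """Supervisor: permit only events whose successor state is not forbidden."""
    return [e for e, s in TRANSITIONS[state].items() if s not in FORBIDDEN]


def q_learning_episode(Q, alpha=0.5, gamma=0.9, eps=0.1):
    """One episode of epsilon-greedy tabular Q-learning restricted to
    supervisor-allowed events, as a stand-in for the RL component."""
    state = "idle"
    while state != "done":
        acts = allowed_events(state)
        if not acts:  # supervisor has blocked every event: deadlock, stop
            break
        if random.random() < eps:
            a = random.choice(acts)  # explore among allowed events only
        else:
            a = max(acts, key=lambda e: Q.get((state, e), 0.0))  # exploit
        nxt = TRANSITIONS[state][a]
        reward = 1.0 if nxt == "done" else 0.0
        # Bootstrap only over events the supervisor would allow next.
        best_next = max(
            (Q.get((nxt, e), 0.0) for e in allowed_events(nxt)), default=0.0
        )
        old = Q.get((state, a), 0.0)
        Q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
        state = nxt
```

Because the mask is applied both when acting and when bootstrapping, the agent never visits (or values) transitions into `FORBIDDEN`, which is the state-avoidance guarantee the supervisor contributes on top of ordinary Q-learning.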