
Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions (2004.07584v2)

Published 16 Apr 2020 in eess.SY, cs.LG, cs.RO, and cs.SY

Abstract: In this paper, the issue of model uncertainty in safety-critical control is addressed with a data-driven approach. For this purpose, we utilize the structure of an input-output linearization controller based on a nominal model along with a Control Barrier Function and Control Lyapunov Function based Quadratic Program (CBF-CLF-QP). Specifically, we propose a novel reinforcement learning framework which learns the model uncertainty present in the CBF and CLF constraints, as well as other control-affine dynamic constraints in the quadratic program. The trained policy is combined with the nominal model-based CBF-CLF-QP, resulting in the Reinforcement Learning-based CBF-CLF-QP (RL-CBF-CLF-QP), which addresses the problem of model uncertainty in the safety constraints. The performance of the proposed method is validated by testing it on an underactuated nonlinear bipedal robot walking on randomly spaced stepping stones with one-step preview, obtaining stable and safe walking under model uncertainty.

Citations (174)

Summary

  • The paper introduces the RL-CBF-CLF-QP framework that unifies reinforcement learning with classical control methods to manage model uncertainties.
  • The paper employs an RL agent to learn and compensate for uncertain dynamics affecting CLF and CBF constraints in safety-critical tasks.
  • The framework demonstrates robust performance on bipedal robots, achieving stable and safe walking under variable system parameters.

Reinforcement Learning for Safety-Critical Control under Model Uncertainty

This paper presents a data-driven approach to model uncertainty in safety-critical control systems using reinforcement learning (RL). The framework leverages the structure of Control Lyapunov Functions (CLFs) and Control Barrier Functions (CBFs) within a Quadratic Program (QP) to ensure stability and safety in dynamical systems with uncertain models. Specifically, the authors propose an RL-based framework, termed RL-CBF-CLF-QP, which integrates learning into the control design to handle uncertainties impacting both the CLF and CBF constraints, along with other dynamic constraints.
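
For context, a representative nominal CBF-CLF-QP controller takes roughly the following form (a simplified sketch; the exact weights, relaxation terms, and additional constraints in the paper differ in detail):

```latex
u^*(x) \;=\; \underset{(u,\, d)}{\arg\min}\;\; u^\top H\, u \;+\; p\, d^2
\quad \text{s.t.} \quad
\begin{cases}
L_f V(x) + L_g V(x)\, u + \lambda V(x) \;\le\; d & \text{(CLF: stability, relaxed by } d\text{)}\\
L_f B(x) + L_g B(x)\, u + \gamma B(x) \;\ge\; 0 & \text{(CBF: safety)}\\
u_{\min} \;\le\; u \;\le\; u_{\max} & \text{(input limits)}
\end{cases}
```

Here V is a Control Lyapunov Function, B a Control Barrier Function, and the slack variable d keeps the QP feasible by allowing the stability constraint to be softened while the safety constraint remains hard.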

The paper focuses on a data-driven approach where an RL agent is trained to estimate and compensate for model uncertainties directly affecting the safety-critical control tasks managed by CBFs and CLFs. The framework adapts the nominal model-based CBF-CLF-QP to incorporate learned uncertainties, thereby enhancing safety guarantees and performance consistency during execution.
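
Concretely, when the Lie derivatives in the constraints are computed from a nominal model, the true constraint derivatives pick up uncertainty terms that are affine in the input; schematically (notation simplified from the paper):

```latex
\dot V(x,u) \;=\; \underbrace{L_{\hat f} V(x) + L_{\hat g} V(x)\, u}_{\text{nominal model}}
\;+\; \underbrace{\Delta_V^{1}(x) + \Delta_V^{2}(x)\, u}_{\text{model uncertainty}},
\qquad
\dot B(x,u) \;=\; L_{\hat f} B(x) + L_{\hat g} B(x)\, u \;+\; \Delta_B^{1}(x) + \Delta_B^{2}(x)\, u
```

The RL policy is trained to output estimates of these uncertainty terms, which are then substituted into the CLF and CBF constraints so that the QP is solved against (approximately) the true, rather than nominal, dynamics.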

Key Components and Contributions

  1. Unified Reinforcement Learning Framework: The RL-CBF-CLF-QP framework couples RL with classical control methods, unifying the learning processes for model uncertainties in CLF and CBF constraints. This approach facilitates the simultaneous handling of safety and stability, leveraging the RL agent to learn a policy that minimizes the estimation errors related to system dynamics.
  2. Estimation of Uncertain Terms: The paper formulates a strategy where the RL agent learns approximation models for the uncertain terms affecting the CLF and CBF constraints. This both compensates for model mismatch and keeps the constraints aligned with the true system dynamics (a minimal sketch of how such learned corrections can enter the QP follows this list).
  3. Application to Bipedal Robots: The authors validate their RL framework on an underactuated nonlinear hybrid system—a bipedal robot—demonstrating walking tasks on randomly spaced stepping stones. The proposed method achieves stable and safe walking performance, addressing significant model uncertainties and demonstrating robustness to variations in system parameters such as mass and inertia.
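
As a hedged illustration of how learned corrections might be folded into the QP, the sketch below solves one RL-corrected CBF-CLF-QP step with cvxpy. The function name, argument layout, and all numerical values are hypothetical and not taken from the paper's implementation; delta_V and delta_B stand in for the outputs of a trained RL policy.

```python
# Minimal, illustrative sketch (not the authors' released code): one step of an
# RL-corrected CBF-CLF-QP. LfV, LgV, LfB, LgB are Lie derivatives computed from
# the nominal model; delta_V and delta_B are hypothetical outputs of a trained
# RL policy estimating the uncertain terms in the CLF and CBF constraints,
# each given as a (state term, input-affine term) pair.
import numpy as np
import cvxpy as cp


def rl_cbf_clf_qp_step(LfV, LgV, V, LfB, LgB, B,
                       delta_V, delta_B, H,
                       p=1e3, lam=1.0, gamma=1.0,
                       u_min=None, u_max=None):
    """Solve one QP for the input u, with learned corrections added to the
    nominal CLF (stability) and CBF (safety) constraints."""
    m = H.shape[0]
    u = cp.Variable(m)   # control input
    d = cp.Variable()    # CLF relaxation variable (keeps the QP feasible)

    dV0, dV1 = delta_V   # learned corrections to the CLF constraint
    dB0, dB1 = delta_B   # learned corrections to the CBF constraint

    objective = cp.Minimize(cp.quad_form(u, H) + p * cp.square(d))
    constraints = [
        # CLF constraint with learned correction, relaxed by d
        LfV + dV0 + (LgV + dV1) @ u + lam * V <= d,
        # CBF constraint with learned correction, kept as a hard constraint
        LfB + dB0 + (LgB + dB1) @ u + gamma * B >= 0,
    ]
    if u_min is not None:
        constraints.append(u >= u_min)
    if u_max is not None:
        constraints.append(u <= u_max)

    cp.Problem(objective, constraints).solve()
    return u.value


# Purely illustrative call with placeholder numbers (single-input system):
u_star = rl_cbf_clf_qp_step(
    LfV=-0.2, LgV=np.array([1.0]), V=0.5,
    LfB=0.1, LgB=np.array([0.8]), B=0.3,
    delta_V=(0.05, np.array([0.1])), delta_B=(-0.02, np.array([0.0])),
    H=np.eye(1), u_min=np.array([-10.0]), u_max=np.array([10.0]))
print(u_star)
```

The large relaxation weight p prioritizes the hard safety (CBF) constraint over the soft stability (CLF) constraint, mirroring the standard CBF-CLF-QP design.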

Practical Implications and Theoretical Insights

The framework integrates reinforcement learning into control systems design, offering a promising avenue for adaptive management of safety-critical constraints. Practically, incorporating RL enables complex robotic systems, particularly those with many degrees of freedom and substantial model uncertainty, to operate more reliably and safely. The ability to learn compensations for model mismatch offers significant advantages for systems facing dynamically varying conditions, such as robots navigating uncertain or hostile environments.

Theoretically, this approach builds on the robust foundations of CLF and CBF methods, enriching them with learning-based tools to manage uncertainty effectively. It opens discussions on the application of RL in safety-critical scenarios, encouraging further exploration of hybrid learning-control approaches where adaptive learning plays a critical role in systems design.

Future Directions

Future research may explore extending this RL-based approach to real-world scenarios and more diverse application domains, such as autonomous vehicles or aircraft, where safety and robustness under uncertainty are paramount. Additionally, examining the balance between training efficiency and model accuracy could lead to further improvements in system responsiveness and adaptability. The intersection of RL and formal control strategies offers abundant potential for innovation in the management of complex and uncertain systems.