
Lagrangian-based online safe reinforcement learning for state-constrained systems

Published 22 May 2023 in eess.SY and cs.SY (arXiv:2305.12967v2)

Abstract: This paper proposes a safe reinforcement learning (RL) algorithm that approximately solves the state-constrained optimal control problem for continuous-time uncertain nonlinear systems. We formulate the safe RL problem as the minimization of a Lagrangian that includes the cost functional and a user-defined barrier Lyapunov function (BLF) encoding the state constraints. We show that the analytical solution obtained by the application of Karush-Kuhn-Tucker (KKT) conditions contains a state-dependent expression for the Lagrange multiplier, which is a function of uncertain terms in the system dynamics. We argue that a naive estimation of the Lagrange multiplier may lead to safety constraint violations. To obviate this challenge, we propose an Actor-Critic-Identifier-Lagrangian (ACIL) algorithm that learns optimal control policies from online data without compromising safety. We provide safety and boundedness guarantees with the proposed algorithm and compare its performance with existing offline/online RL methods via a simulation study.
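As a rough illustration of the formulation the abstract describes, the sketch below writes out a state-constrained optimal control problem and a barrier-augmented Lagrangian. The specific symbols ($J$, $r$, $f$, $g$, $\mathcal{C}$, $V_b$, $\lambda$) and the exact way the barrier Lyapunov function enters the Lagrangian are editorial assumptions for exposition, not taken from the paper.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Hedged sketch (symbols and exact form are assumptions, not the paper's):
% a state-constrained optimal control problem for an uncertain
% control-affine system with safe set C.
\begin{align*}
  &\min_{u(\cdot)} \; J(x_0, u) = \int_{0}^{\infty} r\big(x(t), u(t)\big)\, dt \\
  &\text{s.t.}\quad \dot{x} = f(x) + g(x)\,u, \qquad x(t) \in \mathcal{C} \;\; \forall t \ge 0.
\end{align*}

% A user-defined barrier Lyapunov function V_b(x) is finite on the interior
% of C and grows unbounded as x approaches the boundary of C. One natural
% Lagrangian couples the running cost and the barrier through a multiplier:
\begin{equation*}
  \mathcal{L}(x, u, \lambda)
    = r(x, u) + \lambda\, \dot{V}_b(x)
    = r(x, u) + \lambda\, \nabla V_b(x)^{\top}\big(f(x) + g(x)\,u\big).
\end{equation*}

% Applying the KKT stationarity condition in u yields a state-dependent
% multiplier lambda(x) that depends on the uncertain drift f(x); a naive
% estimate of lambda can therefore under-weight the barrier near the
% constraint boundary and permit violations.

\end{document}
```

Read this way, the sketch also suggests why the proposed scheme includes an identifier alongside the actor, critic, and Lagrangian components: the multiplier inherits the uncertainty in the dynamics, so it cannot be treated as a fixed penalty weight, though the abstract does not spell out the exact coupling.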
