
Lagrangian-based online safe reinforcement learning for state-constrained systems (2305.12967v2)

Published 22 May 2023 in eess.SY and cs.SY

Abstract: This paper proposes a safe reinforcement learning (RL) algorithm that approximately solves the state-constrained optimal control problem for continuous-time uncertain nonlinear systems. We formulate the safe RL problem as the minimization of a Lagrangian that includes the cost functional and a user-defined barrier Lyapunov function (BLF) encoding the state constraints. We show that the analytical solution obtained by the application of Karush-Kuhn-Tucker (KKT) conditions contains a state-dependent expression for the Lagrange multiplier, which is a function of uncertain terms in the system dynamics. We argue that a naive estimation of the Lagrange multiplier may lead to safety constraint violations. To obviate this challenge, we propose an Actor-Critic-Identifier-Lagrangian (ACIL) algorithm that learns optimal control policies from online data without compromising safety. We provide safety and boundedness guarantees with the proposed algorithm and compare its performance with existing offline/online RL methods via a simulation study.
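The barrier Lyapunov function (BLF) mentioned in the abstract is a function that stays finite while the state respects its constraint and diverges as the state approaches the constraint boundary, which is how it encodes the state constraints inside the Lagrangian. As a minimal illustration (the log-type form below is a common choice in the BLF literature, not necessarily the paper's exact function; the bound `b` and scalar state are assumptions for the sketch):

```python
import math

def blf(x: float, b: float) -> float:
    """Illustrative log-type barrier Lyapunov function for the
    scalar constraint |x| < b: zero at the origin, finite inside
    the constraint set, and diverging as |x| -> b."""
    return 0.5 * math.log(b**2 / (b**2 - x**2))

b = 1.0
print(blf(0.0, b))  # 0.0 at the origin
print(blf(0.5, b))  # small well inside the constraint set
print(blf(0.9, b))  # grows rapidly near the boundary |x| = b
```

Minimizing a Lagrangian of the form cost-plus-multiplier-times-BLF therefore penalizes trajectories that drift toward the constraint boundary, which is the mechanism the proposed ACIL algorithm exploits while learning online.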

Authors (2)
  1. Soutrik Bandyopadhyay (4 papers)
  2. Shubhendu Bhasin (24 papers)
Citations (1)
