
Off-Policy Risk-Sensitive Reinforcement Learning Based Constrained Robust Optimal Control (2006.05681v6)

Published 10 Jun 2020 in eess.SY and cs.SY

Abstract: This paper proposes an off-policy risk-sensitive reinforcement learning based control framework for stabilization of a continuous-time nonlinear system that is subject to additive disturbances, input saturation, and state constraints. By introducing pseudo controls and risk-sensitive input and state penalty terms, the constrained robust stabilization problem of the original system is converted into an equivalent optimal control problem for an auxiliary system. For the transformed optimal control problem, we adopt adaptive dynamic programming (ADP) implemented with a single-critic structure to approximate the value function of the Hamilton-Jacobi-Bellman (HJB) equation, which yields an approximate optimal control policy that satisfies both input and state constraints under disturbances. By replaying experience data in the off-policy weight update law of the critic neural network, weight convergence is guaranteed. Moreover, to obtain experience data that provides the excitation required for weight convergence, online and offline algorithms are developed as principled ways to record informative experience data. An equivalence proof demonstrates that the optimal control strategy of the auxiliary system robustly stabilizes the original system without violating input or state constraints. Proofs of system stability and weight convergence are provided, and simulation results demonstrate the validity of the proposed control framework.
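
The abstract does not reproduce the paper's equations, but a sketch of how constrained-input problems of this kind are commonly posed in the ADP literature may help fix notation. The symbols below (state cost Q, input weight R, saturation bound \lambda, critic basis \phi, weight vector \hat{W}) are illustrative assumptions; the paper's pseudo-control and risk-sensitive penalty construction may differ:

\[
U(u) \;=\; 2\int_{0}^{u} \lambda \,\tanh^{-1}\!\big(\tfrac{v}{\lambda}\big)^{\top} R \,\mathrm{d}v,
\qquad
V(x_0) \;=\; \int_{0}^{\infty} \big( Q(x) + U(u) \big)\,\mathrm{d}t,
\]
\[
0 \;=\; \min_{u}\Big[\, Q(x) + U(u) + \nabla V(x)^{\top}\big( f(x) + g(x)\,u \big) \Big],
\qquad
V(x) \;\approx\; \hat{W}^{\top}\phi(x),
\]
\[
\hat{u}(x) \;=\; -\lambda \tanh\!\Big( \tfrac{1}{2\lambda}\, R^{-1} g(x)^{\top} \nabla\phi(x)^{\top}\hat{W} \Big).
\]

Folding the saturation bound into the non-quadratic penalty U(u) keeps the minimizing control inside the admissible set; the critic then only has to approximate the value function, from which the policy follows in closed form.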

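The off-policy critic update with experience replay can also be sketched in code. The following minimal Python example illustrates the general idea the abstract describes (replaying recorded samples so the regressors excite all weight directions and the critic weights converge); it is not the paper's exact update law, and the dynamics f, g, the basis phi, the cost, and the learning rate are hypothetical stand-ins.

```python
# Illustrative sketch, not the paper's exact update law: a single-critic ADP
# weight update that replays recorded experience data so the stacked
# regressors provide the excitation needed for weight convergence.
import numpy as np

def f(x):                      # hypothetical drift dynamics
    return np.array([-x[0] + x[1], -0.5 * x[1]])

def g(x):                      # hypothetical input dynamics
    return np.array([0.0, 1.0])

def phi_grad(x):               # gradient of critic basis phi(x) = [x1^2, x1*x2, x2^2]
    return np.array([[2 * x[0], 0.0],
                     [x[1],     x[0]],
                     [0.0,      2 * x[1]]])

def running_cost(x, u, Q=np.eye(2), R=1.0):
    # quadratic stand-in for the paper's risk-sensitive input/state penalties
    return x @ Q @ x + R * u ** 2

def bellman_residual(W, x, u):
    """Residual of the approximate HJB along one recorded (state, input) sample."""
    sigma = phi_grad(x) @ (f(x) + g(x) * u)   # regressor: d(phi)/dt along the trajectory
    return sigma, W @ sigma + running_cost(x, u)

def replay_update(W, replay_buffer, lr=0.5):
    """One gradient step on the summed normalized Bellman residuals (experience replay)."""
    dW = np.zeros_like(W)
    for x, u in replay_buffer:
        sigma, delta = bellman_residual(W, x, u)
        dW -= lr * sigma / (1.0 + sigma @ sigma) ** 2 * delta
    return W + dW

# Usage: record informative (state, input) samples, then iterate the update.
rng = np.random.default_rng(0)
buffer = [(rng.normal(size=2), rng.normal()) for _ in range(20)]
W = np.zeros(3)
for _ in range(200):
    W = replay_update(W, buffer)
print("critic weights:", W)
```

The normalization 1/(1 + sigma^T sigma)^2 is a common way to bound the effective step size, and the recorded buffer stands in for the online/offline data-recording algorithms the abstract mentions; because the update uses stored samples rather than only the current state, it is off-policy in the sense used above.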
Citations (6)
