A Novel Policy Iteration Algorithm for Nonlinear Continuous-Time $H_\infty$ Control Problem (2401.13014v1)

Published 23 Jan 2024 in eess.SY and cs.SY

Abstract: $H_\infty$ control of nonlinear continuous-time systems depends on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation, for which a closed-form solution has been proven impossible to obtain due to the equation's nonlinearity. Many iterative algorithms have been proposed to solve the HJI equation, and most of them are essentially Newton's method once the fixed-point equation is constructed in a Banach space. Newton's method is a local optimization method: it has a small convergence region and requires the initial guess to be sufficiently close to the solution. The damped Newton method, by contrast, is more robust with respect to the initial condition and has a larger convergence region. In this paper, a novel reinforcement learning method named $\alpha$-policy iteration ($\alpha$-PI) is introduced for solving the HJI equation. First, by constructing a damped Newton iteration operator equation, a generalized Bellman equation (GBE) is obtained; the GBE is an extension of the Bellman equation. Second, by iterating on the GBE, an on-policy $\alpha$-PI reinforcement learning method is proposed that does not use knowledge of the system's internal dynamics. Third, based on the on-policy method, an off-policy $\alpha$-PI reinforcement learning method is developed that requires no knowledge of the system dynamics. Finally, neural-network-based adaptive critic implementation schemes of the on-policy and off-policy $\alpha$-PI algorithms are derived, with the batch least-squares method used to compute the weight parameters of the neural networks. The effectiveness of the off-policy $\alpha$-PI algorithm is verified through computer simulation.
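
The abstract does not spell out the generalized Bellman equation, but the damped Newton idea it builds on has a standard form. As a rough sketch, assuming the input-affine system $\dot{x} = f(x) + g(x)u + k(x)w$ with penalty output $h(x)$ and $L_2$-gain bound $\gamma$ (the usual setting in this literature, not stated in the abstract), the HJI equation is

$$
0 = h(x)^\top h(x) + \nabla V(x)^\top f(x)
  - \tfrac{1}{4}\,\nabla V(x)^\top g(x) g(x)^\top \nabla V(x)
  + \tfrac{1}{4\gamma^2}\,\nabla V(x)^\top k(x) k(x)^\top \nabla V(x).
$$

Writing this as an operator equation $G(V) = 0$ in a Banach space, the damped Newton iteration takes the form

$$
V_{i+1} = V_i - \alpha\,\bigl[G'(V_i)\bigr]^{-1} G(V_i), \qquad \alpha \in (0, 1],
$$

where $G'(V_i)$ denotes the Fréchet derivative of $G$ at $V_i$. For $\alpha = 1$ this reduces to the ordinary Newton iteration, i.e., classical policy iteration; a smaller $\alpha$ trades per-step progress for a larger convergence region, which is the property the paper's GBE is described as encoding in Bellman-equation form.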
