
A Novel Policy Iteration Algorithm for the Nonlinear Continuous-Time $H_\infty$ Control Problem

Published 23 Jan 2024 in eess.SY and cs.SY | arXiv:2401.13014v1

Abstract: $H_\infty$ control of nonlinear continuous-time systems depends on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which has been proven to admit no closed-form solution due to its nonlinearity. Many iterative algorithms have been proposed to solve the HJI equation, and most of them are essentially Newton's method once the fixed-point equation is constructed in a Banach space. Newton's method is a local optimization method: it has a small convergence region and requires the initial guess to be sufficiently close to the solution. The damped Newton method, by contrast, is more robust with respect to the initial condition and has a larger convergence region. In this paper, a novel reinforcement learning method named $\alpha$-policy iteration ($\alpha$-PI) is introduced for solving the HJI equation. First, by constructing a damped Newton iteration operator equation, a generalized Bellman equation (GBE), an extension of the Bellman equation, is obtained. Second, by iterating on the GBE, an on-policy $\alpha$-PI reinforcement learning method that requires no knowledge of the system's internal dynamics is proposed. Third, based on the on-policy method, we develop an off-policy $\alpha$-PI reinforcement learning method that requires no knowledge of the system dynamics at all. Finally, neural-network-based adaptive critic implementation schemes for the on-policy and off-policy $\alpha$-PI algorithms are derived, with the batch least-squares method used to compute the neural-network weight parameters. The effectiveness of the off-policy $\alpha$-PI algorithm is verified through computer simulation.
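For background, the HJI equation referenced above typically takes the following standard form in nonlinear $H_\infty$ control, for a system $\dot{x} = f(x) + g(x)u + k(x)d$ with penalty output $h(x)$ and attenuation level $\gamma$. This is the textbook formulation, shown here as context rather than quoted from the paper:

```latex
% Standard HJI equation of nonlinear H-infinity control (background form,
% not quoted from the paper); V(x) is the value function.
0 = \nabla V^{\top} f(x) + h^{\top}(x)\, h(x)
  - \tfrac{1}{4}\, \nabla V^{\top} g(x)\, g^{\top}(x)\, \nabla V
  + \tfrac{1}{4\gamma^{2}}\, \nabla V^{\top} k(x)\, k^{\top}(x)\, \nabla V
```

The terms quadratic in $\nabla V$ are what make the equation nonlinear and rule out a closed-form solution in general.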
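The numerical idea the abstract leans on, damping a Newton iteration $x_{k+1} = x_k - \alpha\,[F'(x_k)]^{-1} F(x_k)$ with $\alpha \in (0, 1]$ to enlarge its convergence region, can be illustrated with a minimal sketch. This is not the paper's $\alpha$-PI algorithm; the function names and the arctan test problem are illustrative assumptions.

```python
import numpy as np

def damped_newton(F, J, x0, alpha=0.5, tol=1e-10, max_iter=200):
    """Damped Newton iteration: x <- x - alpha * J(x)^{-1} F(x).

    alpha = 1.0 recovers the plain Newton method; alpha in (0, 1)
    trades per-step progress for a larger convergence region.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for k in range(1, max_iter + 1):
        step = np.linalg.solve(np.atleast_2d(J(x)), F(x))
        x = x - alpha * step
        if np.linalg.norm(x) > 1e6:      # iteration is diverging
            break
        if np.linalg.norm(F(x)) < tol:   # residual is small enough
            break
    return x, k

# Illustrative scalar test problem (an assumption, not the paper's system):
# plain Newton on F(x) = arctan(x) diverges for |x0| > ~1.39, while the
# damped iteration with alpha = 0.5 still converges to the root x = 0.
F = lambda x: np.arctan(x)
J = lambda x: np.diag(1.0 / (1.0 + x**2))

print(damped_newton(F, J, x0=[2.0], alpha=1.0))  # plain Newton: diverges
print(damped_newton(F, J, x0=[2.0], alpha=0.5))  # damped: converges to ~0
```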
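The final step the abstract describes, computing critic weights by batch least squares, reduces to an ordinary linear regression when the critic is linear in its weights, $V(x) \approx w^{\top}\phi(x)$. A minimal sketch follows; the quadratic features and the synthetic regression targets are assumptions standing in for the paper's generalized-Bellman-equation data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical critic features (an assumption): quadratic monomials of a
# 2-D state, a common basis for value-function approximation.
def phi(x):
    x1, x2 = x
    return np.array([x1**2, x1 * x2, x2**2])

# Sampled states and synthetic targets standing in for the GBE residual
# data that the paper's scheme would collect along trajectories.
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = X[:, 0]**2 + 0.5 * X[:, 1]**2

Phi = np.stack([phi(x) for x in X])          # (N, 3) regressor matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # batch least-squares solve
print(w)  # recovers approximately [1.0, 0.0, 0.5]
```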
