
A Supplementary Condition for the Convergence of the Control Policy during Adaptive Dynamic Programming (1803.07743v4)

Published 21 Mar 2018 in math.OC

Abstract: Reinforcement learning based adaptive/approximate dynamic programming (ADP) is a powerful technique for determining an approximate optimal controller for a dynamical system. These methods bypass the need to analytically solve the nonlinear Hamilton-Jacobi-Bellman (HJB) equation, whose solution is required to obtain the optimal control policy but is often too difficult to determine. ADP methods typically employ a policy iteration algorithm that evaluates and improves a value function at every step in order to find the optimal control policy. Previous works in ADP have lacked a stronger condition that ensures the convergence of the policy iteration algorithm. This paper provides a sufficient, but not necessary, condition that guarantees the convergence of an ADP algorithm. This condition may provide a more solid theoretical framework for ADP-based control algorithm design for nonlinear dynamical systems.
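To illustrate the policy iteration scheme the abstract refers to (alternating policy evaluation and policy improvement), here is a minimal sketch on a hypothetical two-state, two-action discrete MDP. The paper itself concerns continuous nonlinear systems, where evaluation means solving a Lyapunov-type equation tied to the HJB equation rather than a finite linear system; the transition and reward numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustration only).
P = np.array([                      # P[a, s, s'] = transition probability
    [[0.9, 0.1], [0.2, 0.8]],       # action 0
    [[0.5, 0.5], [0.0, 1.0]],       # action 1
])
R = np.array([[1.0, 0.0],           # R[a, s] = expected reward
              [0.5, 2.0]])
gamma = 0.9                         # discount factor

policy = np.zeros(2, dtype=int)     # start from an initial admissible policy
for _ in range(100):
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
    P_pi = P[policy, np.arange(2)]
    r_pi = R[policy, np.arange(2)]
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    # Policy improvement: greedy one-step lookahead on the evaluated v.
    q = R + gamma * P @ v           # q[a, s]
    new_policy = np.argmax(q, axis=0)
    if np.array_equal(new_policy, policy):
        break                       # policy is stable: optimal for this MDP
    policy = new_policy
```

In the finite case this loop converges monotonically in a finite number of steps; the paper's contribution is a supplementary condition under which the analogous iteration for continuous nonlinear systems is guaranteed to converge.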
