Differential Dynamic Programming for Nonlinear Dynamic Games (1809.08302v1)
Abstract: Dynamic games arise when multiple agents with differing objectives choose control inputs to a dynamic system. Dynamic games model a wide variety of applications in economics, defense, and energy systems. However, compared to single-agent control problems, the computational methods for dynamic games are relatively limited. As in the single-agent case, only very specialized dynamic games can be solved exactly, and so approximation algorithms are required. This paper extends the differential dynamic programming algorithm from single-agent control to the case of non-zero-sum, full-information dynamic games. The method works by computing quadratic approximations to the dynamic programming equations. The approximation results in static quadratic games which are solved recursively. Convergence is proved by showing that the algorithm iterates sufficiently close to iterates of Newton's method to inherit its convergence properties. A numerical example is provided.
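To make the per-stage step concrete, the following is a minimal sketch (not the authors' implementation) of the reduction the abstract describes: once each player's dynamic-programming recursion has been approximated quadratically at a stage, the stage becomes a static two-player quadratic game, and its Nash equilibrium is obtained by solving a coupled linear system for both players' affine feedback policies. All symbol names (Q1uu, Q1uv, Q2vx, etc.) are illustrative assumptions, not notation taken from the paper.

```python
# Sketch only: solve the static quadratic game that arises at one stage of a
# DDP-style backward pass for a two-player, non-zero-sum, full-information game.
# Player 1 chooses u, player 2 chooses v; each player's quadratically
# approximated cost-to-go gives blocks Q*_{..} of second derivatives and
# gradients (names here are assumed, not the paper's notation).
import numpy as np

def solve_stage_quadratic_game(Q1uu, Q1uv, Q1ux, Q1u,
                               Q2vu, Q2vv, Q2vx, Q2v):
    """Nash (stationarity) conditions of the static quadratic game:
         Q1uu u + Q1uv v + Q1ux x + Q1u = 0   (player 1)
         Q2vu u + Q2vv v + Q2vx x + Q2v = 0   (player 2)
       Stacking them gives one linear system for the affine policies
         u = k1 + K1 x,   v = k2 + K2 x.
    """
    M = np.block([[Q1uu, Q1uv],
                  [Q2vu, Q2vv]])                # coupled curvature matrix
    rhs_ff = -np.concatenate([Q1u, Q2v])        # feedforward right-hand side
    rhs_fb = -np.vstack([Q1ux, Q2vx])           # feedback right-hand side
    k = np.linalg.solve(M, rhs_ff)              # stacked [k1; k2]
    K = np.linalg.solve(M, rhs_fb)              # stacked [K1; K2]
    nu = Q1uu.shape[0]
    return k[:nu], K[:nu], k[nu:], K[nu:]       # (k1, K1, k2, K2)
```

In a full backward pass, the gains returned here would be substituted back into each player's quadratic approximation to propagate value-function estimates to the previous stage, analogous to the single-agent DDP recursion the paper generalizes.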