
Global Convergence of Policy Gradient for Sequential Zero-Sum Linear Quadratic Dynamic Games (1911.04672v1)

Published 12 Nov 2019 in eess.SY, cs.SY, and math.OC

Abstract: We propose projection-free sequential algorithms for linear-quadratic (LQ) dynamic games. These policy-gradient-based algorithms are akin to the Stackelberg leadership model and can be extended to model-free settings. We show that if the leader performs natural gradient descent/ascent, the proposed algorithm converges globally to the Nash equilibrium at a sublinear rate. Moreover, if the leader adopts a quasi-Newton policy, the algorithm enjoys global quadratic convergence. Along the way, we examine and clarify the intricacies of adopting sequential policy updates for LQ games, namely issues pertaining to stabilization, the indefinite cost structure, and circumventing projection steps.
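The sequential (Stackelberg-style) update scheme described in the abstract can be illustrated on a toy scalar zero-sum LQ game. This is a hypothetical sketch, not the paper's algorithm: the follower (maximizer) approximately best-responds via gradient ascent, then the leader (minimizer) takes a single plain gradient step, with finite differences standing in for exact policy gradients. All system parameters below are illustrative.

```python
def cost(K, L, a=0.8, b=0.5, d=0.2, q=1.0, r=1.0, rw=5.0, x0=1.0, T=50):
    """Finite-horizon cost of a scalar zero-sum LQ game under linear
    feedback policies u_t = -K x_t (minimizer) and w_t = L x_t (maximizer):
        x_{t+1} = a x_t + b u_t + d w_t
        J(K, L) = sum_t ( q x_t^2 + r u_t^2 - rw w_t^2 )
    """
    x, J = x0, 0.0
    for _ in range(T):
        u, w = -K * x, L * x
        J += q * x * x + r * u * u - rw * w * w
        x = a * x + b * u + d * w
    return J

def fd_grad(f, z, eps=1e-5):
    """Central finite-difference gradient (stand-in for an exact policy gradient)."""
    return (f(z + eps) - f(z - eps)) / (2.0 * eps)

K, L = 0.0, 0.0           # initial policies; open loop is stable here (|a| < 1)
for _ in range(200):      # leader's outer loop
    for _ in range(50):   # follower approximately best-responds by gradient ascent
        L += 0.01 * fd_grad(lambda l: cost(K, l), L)
    # leader takes one plain gradient-descent step against the follower's response
    K -= 0.01 * fd_grad(lambda k: cost(k, L), K)

# at an approximate saddle point the closed-loop pole a - b*K + d*L is stable
```

Note the gap to the paper: the leader here uses a plain gradient step, whereas the paper's results concern natural gradient (global sublinear convergence) and quasi-Newton (global quadratic convergence) leader updates, and the paper further addresses stabilization and the indefinite cost structure that this well-conditioned scalar toy sidesteps.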

Authors (3)
  1. Jingjing Bu (8 papers)
  2. Lillian J. Ratliff (59 papers)
  3. Mehran Mesbahi (68 papers)
Citations (37)
