Stability of Gradient Learning Dynamics in Continuous Games: Vector Action Spaces (2011.05562v2)

Published 7 Nov 2020 in cs.GT, cs.SY, and eess.SY

Abstract: Towards characterizing the optimization landscape of games, this paper analyzes the stability of gradient-based dynamics near fixed points of two-player continuous games. We introduce the quadratic numerical range as a method to characterize the spectrum of game dynamics and prove the robustness of equilibria to variations in learning rates. By decomposing the game Jacobian into symmetric and skew-symmetric components, we assess the contribution of a vector field's potential and rotational components to the stability of differential Nash equilibria. Our results show that in zero-sum games, all Nash equilibria are stable and robust; in potential games, all stable points are Nash equilibria. For general-sum games, we provide a sufficient condition for instability. We conclude with a numerical example in which learning with timescale separation results in faster convergence.
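
To make the setup concrete, here is a minimal sketch (not from the paper) of the kind of analysis the abstract describes: a hypothetical two-player quadratic game whose cost matrices A, B, C, D and helper `is_stable` are illustrative assumptions, with the game Jacobian decomposed into symmetric (potential) and skew-symmetric (rotational) parts and spectral stability checked under learning-rate scaling.

```python
import numpy as np

# Hypothetical quadratic game (illustrative, not from the paper):
#   f1(x, y) = 0.5 * x' A x + x' B y
#   f2(x, y) = 0.5 * y' D y + y' C x
A = np.array([[2.0, 0.0], [0.0, 1.0]])   # player 1's own-curvature (positive definite)
B = np.array([[0.0, 1.0], [-1.0, 0.0]])  # coupling of player 1's cost to y
C = np.array([[0.0, -1.0], [1.0, 0.0]])  # coupling of player 2's cost to x
D = np.array([[1.5, 0.0], [0.0, 1.0]])   # player 2's own-curvature (positive definite)

# Game Jacobian of the gradient vector field omega(x, y) = (D_x f1, D_y f2)
J = np.block([[A, B],
              [C, D]])

# Decomposition into symmetric (potential) and skew-symmetric (rotational) parts
S = 0.5 * (J + J.T)
K = 0.5 * (J - J.T)
print("eigenvalues of symmetric part:", np.linalg.eigvalsh(S))

def is_stable(J, lr1, lr2, dim1=2):
    """Spectral stability of z' = -Lambda @ omega(z), Lambda = diag(lr1*I, lr2*I):
    stable iff every eigenvalue of Lambda @ J has positive real part."""
    Lam = np.diag([lr1] * dim1 + [lr2] * (J.shape[0] - dim1))
    return bool(np.all(np.linalg.eigvals(Lam @ J).real > 0))

# Vary player 2's learning rate to mimic timescale separation
for lr2 in (1.0, 0.1, 0.01):
    print(f"lr1=1.0, lr2={lr2}: stable = {is_stable(J, 1.0, lr2)}")
```

The loop simply reports whether the scaled Jacobian stays spectrally stable as the learning-rate ratio changes; the paper's quadratic-numerical-range machinery is what characterizes when such stability is guaranteed to persist across learning rates.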
