
Proximal Gradient Descent-Ascent: Variable Convergence under KŁ Geometry (2102.04653v2)

Published 9 Feb 2021 in math.OC and cs.LG

Abstract: The gradient descent-ascent (GDA) algorithm has been widely applied to solve minimax optimization problems. In order to achieve convergent policy parameters for minimax optimization, it is important that GDA generates convergent variable sequences rather than convergent sequences of function values or gradient norms. However, the variable convergence of GDA has been proved only under convexity geometries, and such an understanding is lacking for general nonconvex minimax optimization. This paper fills that gap by studying the convergence of a more general proximal-GDA for regularized nonconvex-strongly-concave minimax optimization. Specifically, we show that proximal-GDA admits a novel Lyapunov function, which monotonically decreases in the minimax optimization process and drives the variable sequence to a critical point. By leveraging this Lyapunov function and the KŁ geometry that parameterizes the local geometries of general nonconvex functions, we formally establish the variable convergence of proximal-GDA to a critical point $x^*$, i.e., $x_t \to x^*, y_t \to y^*(x^*)$. Furthermore, over the full spectrum of the KŁ-parameterized geometry, we show that proximal-GDA achieves different types of convergence rates ranging from sublinear convergence up to finite-step convergence, depending on the geometry associated with the KŁ parameter. This is the first theoretical result on the variable convergence for nonconvex minimax optimization.
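
The abstract does not spell out the update rule, so the following is a minimal NumPy sketch of a proximal-GDA loop for a regularized minimax problem $\min_x \max_y f(x, y) + g(x)$. The choice of $g$ as an ℓ1 regularizer (so its proximal operator is soft-thresholding), the step sizes, and the update order are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gda(grad_x, grad_y, x0, y0, eta_x=1e-2, eta_y=1e-1,
                 reg=1e-3, iters=1000):
    """Generic proximal-GDA loop for min_x max_y f(x, y) + g(x).

    grad_x, grad_y: callables returning the partial gradients of f.
    g is assumed to be reg * ||x||_1 here, so prox_g is soft-thresholding.
    """
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        # Proximal gradient descent step on x.
        x = soft_threshold(x - eta_x * grad_x(x, y), eta_x * reg)
        # Gradient ascent step on y (f is strongly concave in y).
        y = y + eta_y * grad_y(x, y)
    return x, y
```

With reg = 0 the prox reduces to the identity and the loop falls back to plain GDA on f.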

Authors (4)
  1. Ziyi Chen (37 papers)
  2. Yi Zhou (438 papers)
  3. Tengyu Xu (27 papers)
  4. Yingbin Liang (140 papers)
Citations (33)
