
Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity (1802.01504v2)

Published 5 Feb 2018 in math.OC, cs.LG, and stat.ML

Abstract: We consider the convex-concave saddle point problem $\min_{x}\max_{y} f(x)+y^\top A x-g(y)$ where $f$ is smooth and convex and $g$ is smooth and strongly convex. We prove that if the coupling matrix $A$ has full column rank, the vanilla primal-dual gradient method can achieve linear convergence even if $f$ is not strongly convex. Our result generalizes previous work which either requires $f$ and $g$ to be quadratic functions or requires proximal mappings for both $f$ and $g$. We adopt a novel analysis technique that in each iteration uses a "ghost" update as a reference, and show that the iterates in the primal-dual gradient method converge to this "ghost" sequence. Using the same technique we further give an analysis for the primal-dual stochastic variance reduced gradient (SVRG) method for convex-concave saddle point problems with a finite-sum structure.
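To make the algorithmic setup concrete: the vanilla primal-dual gradient method runs simultaneous gradient descent on $x$ and gradient ascent on $y$. The NumPy sketch below applies it to a toy instance with $f(x)=0$ (convex but not strongly convex) and $g(y)=\frac{1}{2}\|y\|^2$ (smooth and strongly convex), where a random tall $A$ has full column rank almost surely; the instance, step size, and iteration count are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

# Minimal sketch of the vanilla primal-dual gradient method for
#   min_x max_y  f(x) + y^T A x - g(y).
# Toy instance (an assumption for illustration, not the paper's experiment):
#   f(x) = 0            -> convex and smooth, but NOT strongly convex
#   g(y) = 0.5 ||y||^2  -> smooth and 1-strongly convex
# A is drawn with more rows than columns, so it has full column rank
# almost surely; the unique saddle point is then (x*, y*) = (0, 0).

rng = np.random.default_rng(0)
n, d = 8, 5
A = rng.standard_normal((n, d))

grad_f = lambda x: np.zeros_like(x)   # gradient of f(x) = 0
grad_g = lambda y: y                  # gradient of g(y) = 0.5 ||y||^2

x = rng.standard_normal(d)
y = rng.standard_normal(n)

# Heuristic step size scaled by ||A||^2; the paper derives its own
# step-size conditions and rate, which this demo does not reproduce.
eta = 0.5 / max(np.linalg.norm(A, 2) ** 2, 1.0)

for _ in range(5000):
    # Simultaneous gradient descent in x and gradient ascent in y.
    gx = grad_f(x) + A.T @ y      # partial gradient w.r.t. x
    gy = A @ x - grad_g(y)        # partial gradient w.r.t. y
    x, y = x - eta * gx, y + eta * gy

print("||x - x*|| =", np.linalg.norm(x), " ||y - y*|| =", np.linalg.norm(y))
```

On this instance the distance to the saddle point $(0, 0)$ decays geometrically, consistent with the linear-convergence regime the paper analyzes under the full-column-rank condition on $A$.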

Authors (2)
  1. Simon S. Du (120 papers)
  2. Wei Hu (309 papers)
Citations (118)
