
Accelerated Primal-Dual Proximal Gradient Splitting Methods for Convex-Concave Saddle-Point Problems (2407.20195v3)

Published 29 Jul 2024 in math.OC, cs.NA, and math.NA

Abstract: In this paper, based on a novel primal-dual dynamical model with adaptive scaling parameters and Bregman divergences, we propose new accelerated primal-dual proximal gradient splitting methods for solving bilinear saddle-point problems with provable optimal nonergodic convergence rates. First, using spectral analysis, we show that a naive extension of the acceleration model for unconstrained optimization problems to a quadratic game is unstable. Motivated by this, we present an accelerated primal-dual hybrid gradient (APDHG) flow that combines acceleration with careful velocity correction. To work with non-Euclidean distances, we also equip our APDHG model with general Bregman divergences and prove the exponential decay of a Lyapunov function. New primal-dual splitting methods are then developed from proper semi-implicit Euler schemes of the continuous model, and the theoretical convergence rates are nonergodic and optimal with respect to the matrix norms, Lipschitz constants, and convexity parameters. Thanks to the primal and dual scaling parameters, both the algorithm design and the convergence analysis automatically cover convex and (partially) strongly convex objectives. Moreover, the use of Bregman divergences not only unifies standard Euclidean distances and more general cases in an elegant way, but also makes our methods more flexible and adaptive to problem-dependent metrics.
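
The abstract does not spell out the APDHG iteration itself. For orientation, here is a minimal sketch of the classical, unaccelerated primal-dual hybrid gradient (PDHG, Chambolle-Pock) iteration for a bilinear saddle-point problem min_x max_y f(x) + <Kx, y> - g*(y), which is the baseline scheme the paper accelerates. The problem instance, proximal maps, and step sizes below are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Sketch of the classical PDHG iteration for the bilinear saddle-point problem
#     min_x max_y  f(x) + <Kx, y> - g*(y),
# instantiated (an illustrative choice, not from the paper) as
#     min_x  0.5*||x - b||^2 + lam*||Kx||_1,
# whose dual term g*(y) is the indicator of the box {y : ||y||_inf <= lam}.

rng = np.random.default_rng(0)
m, n = 30, 50
K = rng.standard_normal((m, n))
b = rng.standard_normal(n)
lam = 0.1

L = np.linalg.norm(K, 2)      # operator norm ||K||
tau = sigma = 0.9 / L         # step sizes satisfying tau*sigma*||K||^2 < 1
theta = 1.0                   # extrapolation parameter

def prox_f(v, tau):
    # prox of f(x) = 0.5*||x - b||^2
    return (v + tau * b) / (1.0 + tau)

def prox_gstar(v, sigma):
    # prox of the indicator of {||y||_inf <= lam}: projection onto the box
    return np.clip(v, -lam, lam)

x = np.zeros(n)
y = np.zeros(m)
x_bar = x.copy()

for k in range(2000):
    y = prox_gstar(y + sigma * (K @ x_bar), sigma)   # dual ascent step
    x_new = prox_f(x - tau * (K.T @ y), tau)         # primal descent step
    x_bar = x_new + theta * (x_new - x)              # extrapolation
    x = x_new

primal_obj = 0.5 * np.linalg.norm(x - b) ** 2 + lam * np.abs(K @ x).sum()
print(f"primal objective after 2000 iterations: {primal_obj:.6f}")
```

The paper's contribution, per the abstract, is to replace this fixed-step scheme with an accelerated flow featuring adaptive scaling parameters, velocity correction, and Bregman divergences, discretized by semi-implicit Euler schemes to obtain optimal nonergodic rates.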

Citations (1)

