
The Golden Ratio Primal-Dual Algorithm with Two New Stepsize Rules for Convex-Concave Saddle Point Problems (2502.17918v1)

Published 25 Feb 2025 in math.OC

Abstract: In this paper, we present two stepsize rules for the extended Golden Ratio primal-dual algorithm (E-GRPDA), designed to address structured convex optimization problems in finite-dimensional real Hilbert spaces. The first rule features a nonincreasing primal stepsize that remains bounded below by a positive constant and is updated adaptively at each iteration, eliminating the need to know the Lipschitz constant of the gradient of the smooth component or the norm of the linear operator involved. The second rule is fully adaptive, adjusting to the local smoothness of the smooth component and a local estimate of the operator norm; in other words, it yields an adaptive version of E-GRPDA. Importantly, neither rule requires backtracking to estimate the operator norm. We prove that E-GRPDA achieves a sublinear ergodic convergence rate with both stepsize rules, measured by the primal-dual gap function. Additionally, under standard assumptions and with appropriately chosen parameters, we establish an R-linear convergence rate for E-GRPDA with the first stepsize rule. Numerical experiments on several convex optimization problems demonstrate the effectiveness of our approaches and compare them with existing methods.
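To make the setting concrete, the sketch below shows the basic golden-ratio primal-dual iteration for a problem of the form min_x f(x) + g(Kx), applied to a toy instance with f(x) = ½‖x − b‖², g = λ‖·‖₁, and K = I (so the exact solution is soft-thresholding of b). This is only an illustration of the underlying GRPDA template with fixed stepsizes; it is not the paper's E-GRPDA algorithm, and the adaptive stepsize rules the abstract describes are not reproduced here.

```python
import numpy as np

def grpda_l1(b, lam, tau=1.0, sigma=1.0, psi=1.618, iters=5000):
    """Golden-ratio primal-dual iteration (fixed-stepsize sketch) for
    min_x 0.5*||x - b||^2 + lam*||x||_1, i.e. K = I.
    Stepsizes are assumed to satisfy tau*sigma*||K||^2 < psi <= (1+sqrt(5))/2.
    """
    x = np.zeros_like(b)   # primal iterate
    z = np.zeros_like(b)   # golden-ratio averaged point
    y = np.zeros_like(b)   # dual iterate
    for _ in range(iters):
        z = ((psi - 1.0) * x + z) / psi        # convex combination of x and previous z
        v = z - tau * y                        # gradient-like step; K^T y = y since K = I
        x = (v + tau * b) / (1.0 + tau)        # prox of tau*f in closed form
        y = np.clip(y + sigma * x, -lam, lam)  # prox of sigma*g*: projection onto the box
    return x

b = np.array([3.0, -0.5, 1.5])
x = grpda_l1(b, lam=1.0)
print(x)  # close to soft-threshold(b, 1) = [2, 0, 0.5]
```

Because f here is strongly convex, the last iterate converges quickly to the soft-thresholded solution; in the general setting of the paper, the guarantee is the sublinear ergodic rate in the primal-dual gap, with R-linear convergence under the additional assumptions stated in the abstract.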
