
Non-Stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates (1903.05282v6)

Published 13 Mar 2019 in math.OC

Abstract: In this paper, we propose two novel non-stationary first-order primal-dual algorithms to solve nonsmooth composite convex optimization problems. Unlike existing primal-dual schemes, where the parameters are often fixed, our methods use pre-defined, dynamic parameter sequences. We prove that our first algorithm achieves an $\mathcal{O}(1/k)$ convergence rate on the primal-dual gap and on the primal and dual objective residuals, where $k$ is the iteration counter. The rate holds on the non-ergodic (i.e., last-iterate) sequence of the primal problem and on the ergodic (i.e., averaged) sequence of the dual problem, which we call a semi-ergodic rate. By modifying the step-size update rule, this rate can be further improved on the primal objective residual. When the problem is strongly convex, we develop a second primal-dual algorithm that exhibits an $\mathcal{O}(1/k^2)$ convergence rate on the same three types of guarantees. Again, by modifying the step-size update rule, this rate becomes faster on the primal objective residual. To the best of our knowledge, our primal-dual algorithms are the first to achieve such fast convergence rate guarantees under mild assumptions compared to existing works. As byproducts, we apply our algorithms to constrained convex optimization problems and prove the same convergence rates on both the objective residuals and the feasibility violation. We still obtain at least $\mathcal{O}(1/k^2)$ rates even when the problem is only "semi-strongly" convex. We verify our theoretical results on two well-known numerical examples.
