
Exact convergence rate of the last iterate in subgradient methods (2307.11134v1)

Published 20 Jul 2023 in math.OC

Abstract: We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients. We first introduce a proof technique that generalizes the standard analysis of subgradient methods. It is based on tracking the distance between the current iterate and a different reference point at each iteration. Using this technique, we obtain the exact worst-case convergence rate for the objective accuracy of the last iterate of the projected subgradient method with either constant step sizes or constant step lengths. Tightness is shown with a worst-case instance matching the established convergence rate. We also derive the value of the optimal constant step size when performing $N$ iterations, for which we find that the last-iterate accuracy is smaller than $B R \sqrt{1+\log(N)/4}/\sqrt{N+1}$, where $B$ is a bound on the subgradient norm and $R$ is a bound on the distance between the initial iterate and a minimizer. Finally, we introduce a new optimal subgradient method that achieves the best possible last-iterate accuracy after a given number $N$ of iterations. Its convergence rate $BR/\sqrt{N+1}$ matches exactly the lower bound on the performance of any black-box method on the considered problem class. We also show that there is no universal sequence of step sizes that simultaneously achieves this optimal rate at each iteration, meaning that the dependence of the step size sequence on $N$ is unavoidable.
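As a concrete illustration of the projected subgradient method the abstract analyzes, here is a minimal NumPy sketch. The random piecewise-linear objective, the ball-shaped feasible set, and the constant step size $\gamma = R/(B\sqrt{N+1})$ are all illustrative assumptions: the abstract states that an optimal constant step size exists and gives the resulting bound, but not the step-size formula itself, so the sketch uses the classical textbook choice.

```python
import numpy as np

# Minimal sketch of the projected subgradient method with a constant step
# size, run for a fixed budget of N iterations and returning the LAST
# iterate.  The problem instance and the step size
# gamma = R / (B * sqrt(N + 1)) are illustrative textbook choices, not the
# optimal tuning derived in the paper.

rng = np.random.default_rng(0)

# Nonsmooth convex objective: f(x) = max_i (a_i @ x + b_i).
m, n = 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def f(x):
    return np.max(A @ x + b)

def subgradient(x):
    # A valid subgradient of a pointwise maximum of linear functions:
    # the gradient of any active (maximizing) piece.
    return A[np.argmax(A @ x + b)]

def project(x, R):
    # Euclidean projection onto the ball of radius R around the origin,
    # used here as the feasible set.
    norm = np.linalg.norm(x)
    return x if norm <= R else (R / norm) * x

N = 1000                              # iteration budget (step size depends on it)
R = 10.0                              # assumed bound on ||x_0 - x*||
B = np.linalg.norm(A, axis=1).max()   # bound on subgradient norms
gamma = R / (B * np.sqrt(N + 1))      # classical constant step size

x = np.zeros(n)
for _ in range(N):
    x = project(x - gamma * subgradient(x), R)

print("last-iterate objective value:", f(x))
```

Per the paper's result, an optimally tuned constant step size would guarantee a last-iterate accuracy below $B R \sqrt{1+\log(N)/4}/\sqrt{N+1}$ on this problem class; note how the step size above depends on $N$, which the abstract shows is unavoidable for rate-optimal tuning.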

Citations (45)
