Gradient Norm Minimization of Nesterov Acceleration: $o(1/k^3)$ (2209.08862v1)

Published 19 Sep 2022 in math.OC, cs.LG, cs.NA, and math.NA

Abstract: In the history of first-order algorithms, Nesterov's accelerated gradient descent (NAG) is one of the milestones. However, the cause of the acceleration was a mystery for a long time: the role of the gradient correction was not revealed until the high-resolution differential equation framework proposed in [Shi et al., 2021]. In this paper, we continue to investigate the acceleration phenomenon. First, we provide a significantly simplified proof based on a precise observation and a tighter inequality for $L$-smooth functions. Then, a new implicit-velocity high-resolution differential equation framework, as well as the corresponding implicit-velocity versions of the phase-space representation and Lyapunov function, is proposed to investigate the convergence behavior of the iterative sequence $\{x_k\}_{k=0}^{\infty}$ of NAG. Furthermore, from the two kinds of phase-space representations, we find that the role played by the gradient correction is equivalent to that played by the velocity included implicitly in the gradient, where the only difference is that the iterative sequence $\{y_k\}_{k=0}^{\infty}$ is replaced by $\{x_k\}_{k=0}^{\infty}$. Finally, for the open question of whether the gradient norm minimization of NAG has a faster rate $o(1/k^3)$, we provide a positive answer together with its proof. Meanwhile, a faster rate of objective value minimization, $o(1/k^2)$, is shown for the case $r > 2$.
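
For reference, below is a minimal sketch of the NAG iteration the abstract analyzes, written in Python with a generalized momentum parameter `r` (the classical choice is `r = 3`; the abstract's faster $o(1/k^2)$ objective rate concerns $r > 2$). The function names, the specific momentum coefficient convention `k / (k + r)`, and the step-size choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nag(grad, x0, step, r=3, iters=100):
    """Sketch of Nesterov's accelerated gradient descent.

    Assumptions: grad is the gradient of an L-smooth convex function,
    step <= 1/L, and r is the momentum parameter (r = 3 is classical;
    the paper also considers r > 2).
    """
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(iters):
        x = y - step * grad(y)                 # gradient step at the extrapolated point y_k
        y = x + (k / (k + r)) * (x - x_prev)   # momentum / extrapolation step
        x_prev = x
    return x

# Usage example: minimize the quadratic f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)                  # smoothness constant of this quadratic
x_star = nag(grad, x0=np.zeros(2), step=1.0 / L, iters=200)
```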

Citations (13)
