An Adaptive and Parameter-Free Nesterov's Accelerated Gradient Method for Convex Optimization

Published 16 May 2025 in math.OC (arXiv:2505.11670v1)

Abstract: We propose AdaNAG, an adaptive accelerated gradient method based on Nesterov's accelerated gradient method. AdaNAG is line-search-free and parameter-free, and it achieves the accelerated convergence rates $f(x_k) - f_\star = \mathcal{O}\left(1/k^2\right)$ and $\min_{i\in\{1,\dots,k\}} \|\nabla f(x_i)\|^2 = \mathcal{O}\left(1/k^3\right)$ for an $L$-smooth convex function $f$. We provide a Lyapunov analysis for the convergence proof of AdaNAG, which additionally enables us to propose a novel adaptive gradient descent (GD) method, AdaGD. AdaGD achieves the non-ergodic convergence rate $f(x_k) - f_\star = \mathcal{O}\left(1/k\right)$, matching that of the original GD. The analysis of AdaGD also motivated us to propose a generalized AdaNAG that includes practically useful variants. Numerical results demonstrate that our methods outperform several other recent adaptive methods on representative applications.
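
The abstract does not spell out the update rules, but the two ingredients AdaNAG combines are each standard on their own: Nesterov-style momentum, which normally requires the smoothness constant $L$ as an input, and a line-search-free adaptive step size estimated from successive gradients. The sketch below contrasts these two ideas, using classical Nesterov acceleration and the AdGD step rule of Malitsky and Mishchenko (2020) as a stand-in for adaptivity; the function names and test problem are illustrative assumptions, and neither routine is the paper's AdaNAG or AdaGD.

```python
import numpy as np

def nesterov_agd(grad, x0, L, num_iters=500):
    """Classical Nesterov accelerated gradient with a known Lipschitz
    constant L: the non-adaptive baseline that AdaNAG makes
    parameter-free. Not the paper's algorithm."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(num_iters):
        x_next = y - grad(y) / L                           # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # standard momentum schedule
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

def adgd(grad, x0, num_iters=500, step0=1e-6):
    """Line-search-free adaptive GD using the step rule of Malitsky &
    Mishchenko's AdGD (2020), shown only as a reference point for the
    adaptive, parameter-free idea; the paper's AdaGD rule differs."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    step_prev, theta = step0, np.inf
    x = x_prev - step_prev * g_prev                        # tiny first step to get a second point
    for _ in range(num_iters):
        g = grad(x)
        dg_norm = np.linalg.norm(g - g_prev)
        if dg_norm > 0:
            # local inverse-curvature estimate ||x_k - x_{k-1}|| / (2 ||g_k - g_{k-1}||)
            local = np.linalg.norm(x - x_prev) / (2.0 * dg_norm)
        else:
            local = step_prev                              # gradients coincide; keep the previous step
        step = min(np.sqrt(1.0 + theta) * step_prev, local)  # cap growth of the step size
        theta = step / step_prev
        x_prev, g_prev, step_prev = x, g, step
        x = x - step * g
    return x

# Usage on an L-smooth convex quadratic f(x) = 0.5 * x^T A x (here L = 100).
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x0 = np.ones(3)
print(nesterov_agd(grad, x0, L=100.0))  # -> close to the minimizer at 0
print(adgd(grad, x0))                   # -> close to 0 with no L supplied
```

Both routines converge on this quadratic; the contribution claimed in the abstract is obtaining the accelerated $\mathcal{O}(1/k^2)$ rate of the first routine together with the parameter-free behavior of the second, plus the $\mathcal{O}(1/k^3)$ gradient-norm rate, without line search.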
