Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs (2006.07037v1)

Published 12 Jun 2020 in math.OC and cs.LG

Abstract: Adaptive gradient methods have attracted much attention from the machine learning community due to their high efficiency. However, their acceleration effect in practice, especially in neural network training, is hard to analyze theoretically. The huge gap between theoretical convergence results and practical performance prevents further understanding of existing optimizers and the development of more advanced optimization methods. In this paper, we provide a novel analysis of adaptive gradient methods under an additional mild assumption, and revise AdaGrad to \radagrad to match a better provable convergence rate. To find an $\epsilon$-approximate first-order stationary point of non-convex objectives, we prove that random-shuffling \radagrad achieves a $\tilde{O}(T^{-1/2})$ convergence rate, improving on existing adaptive gradient methods and random-shuffling SGD by factors of $\tilde{O}(T^{-1/4})$ and $\tilde{O}(T^{-1/6})$, respectively. To the best of our knowledge, this is the first demonstration that adaptive gradient methods can be deterministically faster than SGD after finitely many epochs. Furthermore, we conduct comprehensive experiments to validate the additional mild assumption and the acceleration effect brought by second moments and random shuffling.
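For intuition, here is a minimal sketch of diagonal AdaGrad combined with per-epoch random shuffling, the two ingredients the abstract highlights. The function name `adagrad_random_shuffling`, its arguments, and the update rule are illustrative assumptions; the paper's revised variant (\radagrad) is not specified in the abstract and may use a different second-moment scaling.

```python
import numpy as np

def adagrad_random_shuffling(grad_fn, x0, data, epochs=10, lr=0.1, eps=1e-8, seed=0):
    """Illustrative diagonal AdaGrad with per-epoch random shuffling.

    grad_fn(x, sample) should return the gradient of the per-sample loss at x.
    Note: this is generic AdaGrad, not the paper's \radagrad variant.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    second_moment = np.zeros_like(x)              # accumulated squared gradients
    for _ in range(epochs):
        order = rng.permutation(len(data))        # random shuffling: each sample once per epoch
        for i in order:
            g = grad_fn(x, data[i])
            second_moment += g * g                # diagonal second-moment accumulator
            x -= lr * g / (np.sqrt(second_moment) + eps)
    return x
```

The per-epoch permutation implements without-replacement (random-shuffling) sampling, as opposed to the with-replacement sampling assumed in many SGD analyses; the coordinate-wise second moment is what distinguishes adaptive methods from plain SGD.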

Authors (5)
  1. Xunpeng Huang (14 papers)
  2. Hao Zhou (351 papers)
  3. Runxin Xu (30 papers)
  4. Zhe Wang (574 papers)
  5. Lei Li (1293 papers)
Citations (2)
