Adam$^+$: A Stochastic Method with Adaptive Variance Reduction (2011.11985v1)

Published 24 Nov 2020 in cs.LG and math.OC

Abstract: Adam is a widely used stochastic optimization method for deep learning applications. While practitioners prefer Adam because it requires less parameter tuning, its use is problematic from a theoretical point of view since it may not converge. Variants of Adam have been proposed with provable convergence guarantees, but they tend not to be competitive with Adam in practical performance. In this paper, we propose a new method named Adam$^+$ (pronounced Adam-plus). Adam$^+$ retains some of the key components of Adam but has several noticeable differences: (i) it does not maintain a moving average of the second-moment estimate but instead computes a moving average of the first-moment estimate at extrapolated points; (ii) its adaptive step size is formed not by dividing by the square root of the second-moment estimate but by dividing by the root of the norm of the first-moment estimate. As a result, Adam$^+$ requires little parameter tuning, like Adam, but it enjoys a provable convergence guarantee. Our analysis further shows that Adam$^+$ enjoys adaptive variance reduction, i.e., the variance of the stochastic gradient estimator decreases as the algorithm converges, and hence an adaptive convergence rate. We also propose a more general variant of Adam$^+$ with different adaptive step sizes and establish its fast convergence rate. Our empirical studies on various deep learning tasks, including image classification, language modeling, and automatic speech recognition, demonstrate that Adam$^+$ significantly outperforms Adam and achieves performance comparable to best-tuned SGD and momentum SGD.
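
The two changes described in (i) and (ii) can be sketched in a few lines. The snippet below is a minimal NumPy illustration based solely on the abstract; the function name `adam_plus_step`, the extrapolation weight `beta / (1 - beta)`, and the default hyperparameters are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def adam_plus_step(x, x_prev, m, grad_fn, lr=0.1, beta=0.9, eps=1e-8):
    """One illustrative Adam+ update step (sketch based on the abstract).

    x, x_prev : current and previous iterates (np.ndarray)
    m         : moving average of stochastic gradients (first-moment estimate)
    grad_fn   : callable returning a stochastic gradient at a given point
    """
    # (i) Evaluate the stochastic gradient at an extrapolated point instead of
    #     at x itself; the extrapolation weight used here is an assumption.
    z = x + (beta / (1.0 - beta)) * (x - x_prev)
    g = grad_fn(z)

    # Moving average of the first-moment estimate; unlike Adam, no
    # second-moment average is maintained.
    m = beta * m + (1.0 - beta) * g

    # (ii) Adaptive step size: scale by the square root of the norm of the
    #      first-moment estimate rather than an elementwise second moment.
    x_new = x - lr * m / (np.sqrt(np.linalg.norm(m)) + eps)
    return x_new, x, m

# Toy usage: minimize f(x) = ||x||^2 / 2 with noisy gradients.
rng = np.random.default_rng(0)
grad_fn = lambda x: x + 0.01 * rng.standard_normal(x.shape)
x, x_prev, m = np.ones(5), np.ones(5), np.zeros(5)
for _ in range(200):
    x, x_prev, m = adam_plus_step(x, x_prev, m, grad_fn)
```

Note that the step size shrinks as the norm of the first-moment estimate shrinks, which is the mechanism the abstract attributes the adaptive variance reduction to.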

Authors (4)
  1. Mingrui Liu (44 papers)
  2. Wei Zhang (1489 papers)
  3. Francesco Orabona (62 papers)
  4. Tianbao Yang (162 papers)
Citations (25)