Rethinking Adam: A Twofold Exponential Moving Average Approach (2106.11514v3)

Published 22 Jun 2021 in cs.LG and cs.AI

Abstract: Adaptive gradient methods, e.g. \textsc{Adam}, have achieved tremendous success in machine learning. By scaling the learning rate element-wise with a certain form of second moment estimate of the gradients, such methods attain rapid training of modern deep neural networks. Nevertheless, they are observed to suffer from compromised generalization ability compared with stochastic gradient descent (\textsc{SGD}) and tend to be trapped in local minima at an early stage during training. Intriguingly, we discover that substituting the gradient in the second raw moment estimate term with its momentumized version in \textsc{Adam} can resolve the issue. The intuition is that the gradient with momentum contains more accurate directional information, so its second moment estimate is a more favorable option for learning rate scaling than that of the raw gradient. We thereby propose \textsc{AdaMomentum} as a new optimizer that trains fast while generalizing much better. We further develop a theory to back up the improvement in generalization and provide convergence guarantees under both convex and nonconvex settings. Extensive experiments on a wide range of tasks and models demonstrate that \textsc{AdaMomentum} consistently exhibits state-of-the-art performance and superior training stability.
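The core modification described in the abstract can be illustrated with a minimal sketch: the second-moment exponential moving average tracks the momentumized gradient m_t rather than the raw gradient g_t, while everything else follows the familiar Adam recipe. The function name, default hyperparameters, and use of Adam-style bias correction below are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def adamomentum_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One in-place parameter update sketching the twofold-EMA idea:
    the second moment is an EMA of m_t**2 instead of Adam's g_t**2.
    Defaults mirror Adam and are assumptions, not the paper's settings."""
    state.setdefault("step", 0)
    state.setdefault("m", torch.zeros_like(param))  # first moment (momentum)
    state.setdefault("v", torch.zeros_like(param))  # second moment of m_t

    state["step"] += 1
    t = state["step"]

    # First moment: identical to Adam's momentum EMA of the raw gradient.
    state["m"].mul_(beta1).add_(grad, alpha=1 - beta1)

    # Second moment: EMA of the momentumized gradient squared (the key change).
    state["v"].mul_(beta2).addcmul_(state["m"], state["m"], value=1 - beta2)

    # Adam-style bias correction (assumed here for simplicity).
    m_hat = state["m"] / (1 - beta1 ** t)
    v_hat = state["v"] / (1 - beta2 ** t)

    # Element-wise scaled update.
    param.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)
```

Because m_t is smoother than g_t, its squared EMA is less noisy, which is the intuition the abstract gives for why this scaling generalizes better than scaling by the raw gradient's second moment.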

Authors (7)
  1. Yizhou Wang (162 papers)
  2. Yue Kang (12 papers)
  3. Can Qin (37 papers)
  4. Huan Wang (211 papers)
  5. Yi Xu (302 papers)
  6. Yulun Zhang (167 papers)
  7. Yun Fu (131 papers)
Citations (6)