
Does Momentum Change the Implicit Regularization on Separable Data? (2110.03891v2)

Published 8 Oct 2021 in cs.LG and math.OC

Abstract: The momentum acceleration technique is widely adopted in many optimization algorithms. However, there is no theoretical answer to how momentum affects the generalization performance of these algorithms. This paper studies this problem by analyzing the implicit regularization of momentum-based optimization. We prove that on the linear classification problem with separable data and exponential-tailed loss, gradient descent with momentum (GDM) converges to the L2 max-margin solution, the same as vanilla gradient descent. This means that gradient descent with momentum acceleration still converges to a low-complexity model, which guarantees its generalization. We then analyze the stochastic and adaptive variants of GDM (i.e., SGDM and deterministic Adam) and show that they also converge to the L2 max-margin solution. Technically, to overcome the difficulty of error accumulation in analyzing momentum, we construct new potential functions to analyze the gap between the model parameter and the max-margin solution. Numerical experiments are conducted and support our theoretical results.
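The setting described in the abstract (linear classification on separable data with an exponential-tailed loss, optimized with and without momentum) is easy to reproduce numerically. Below is a minimal NumPy sketch, not the paper's code: it runs plain gradient descent and heavy-ball GDM on the logistic loss over a synthetic separable dataset and checks that their normalized iterates point in essentially the same direction, consistent with the claim that both converge to the L2 max-margin solution. The data generation, step size, momentum coefficient, and iteration count are illustrative choices.

```python
import numpy as np

# Minimal sketch (not the paper's code): linear classification on separable data
# with the logistic loss, comparing plain gradient descent (GD) against gradient
# descent with heavy-ball momentum (GDM). Per the paper, both converge in
# direction to the L2 max-margin solution, so their normalized iterates should align.

rng = np.random.default_rng(0)

# Synthetic linearly separable data: labels in {-1, +1}, separated along the
# first coordinate so that y_i * x_i[0] >= 1 for every sample.
n, d = 200, 2
y = rng.choice([-1.0, 1.0], size=n)
X = rng.normal(size=(n, d))
X[:, 0] = y * (1.0 + np.abs(X[:, 0]))

def grad(w):
    # Gradient of the average logistic loss (1/n) * sum_i log(1 + exp(-y_i <w, x_i>)).
    margins = y * (X @ w)
    coeff = -y / (1.0 + np.exp(margins))
    return (X * coeff[:, None]).mean(axis=0)

def run(momentum, lr=0.1, steps=50_000):
    w = np.zeros(d)
    v = np.zeros(d)
    for _ in range(steps):
        v = momentum * v + grad(w)   # heavy-ball buffer; momentum=0 recovers plain GD
        w = w - lr * v
    return w / np.linalg.norm(w)     # only the direction matters on separable data

w_gd = run(momentum=0.0)
w_gdm = run(momentum=0.9)

# Cosine similarity between the two directions; expected to be very close to 1.
print("cosine(GD direction, GDM direction) =", float(w_gd @ w_gdm))
```

Only directions are compared because on separable data the parameter norm diverges while the loss tends to zero; the paper's result concerns the limiting direction, which is the L2 max-margin separator for both GD and GDM.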

Authors (7)
  1. Bohan Wang (42 papers)
  2. Qi Meng (50 papers)
  3. Huishuai Zhang (64 papers)
  4. Ruoyu Sun (70 papers)
  5. Wei Chen (1290 papers)
  6. Zhi-Ming Ma (56 papers)
  7. Tie-Yan Liu (242 papers)
Citations (11)
