Understanding Edge-of-Stability Training Dynamics with a Minimalist Example (2210.03294v2)

Published 7 Oct 2022 in cs.LG, math.OC, and stat.ML

Abstract: Recently, researchers observed that gradient descent for deep neural networks operates in an "edge-of-stability" (EoS) regime: the sharpness (maximum eigenvalue of the Hessian) is often larger than the stability threshold $2/\eta$ (where $\eta$ is the step size). Despite this, the loss oscillates and converges in the long run, and the sharpness at the end is just slightly below $2/\eta$. While many other well-understood nonconvex objectives, such as matrix factorization or two-layer networks, can also converge despite large sharpness, there is often a larger gap between the sharpness of the endpoint and $2/\eta$. In this paper, we study the EoS phenomenon by constructing a simple function that exhibits the same behavior. We give a rigorous analysis of its training dynamics in a large local region and explain why the final converging point has sharpness close to $2/\eta$. Globally, we observe that the training dynamics for our example have an interesting bifurcating behavior, which was also observed in the training of neural nets.
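To make the quantities in the abstract concrete, here is a minimal sketch (not the paper's construction) that runs gradient descent on a simple scalar-factorization stand-in loss $\ell(a,b) = \tfrac{1}{2}(ab-1)^2$, tracking the sharpness (largest Hessian eigenvalue) against the stability threshold $2/\eta$ at each step. The loss, step size, and initialization are illustrative choices, not values from the paper.

```python
# Minimal sketch (assumed stand-in loss, not the paper's minimalist example):
# gradient descent on l(a, b) = 0.5 * (a*b - 1)^2, tracking sharpness vs 2/eta.
import numpy as np

def loss(w):
    a, b = w
    return 0.5 * (a * b - 1.0) ** 2

def grad(w):
    a, b = w
    r = a * b - 1.0                      # residual
    return np.array([r * b, r * a])

def hessian(w):
    a, b = w
    # d2/da2 = b^2, d2/db2 = a^2, cross term = 2ab - 1
    return np.array([[b * b, 2 * a * b - 1.0],
                     [2 * a * b - 1.0, a * a]])

eta = 0.3                                # step size (illustrative choice)
threshold = 2.0 / eta                    # stability threshold 2/eta
w = np.array([3.0, 0.1])                 # unbalanced init so sharpness starts above 2/eta

for t in range(100):
    sharpness = np.linalg.eigvalsh(hessian(w)).max()
    if t % 10 == 0:
        print(f"step {t:3d}  loss {loss(w):.4f}  "
              f"sharpness {sharpness:.2f}  2/eta {threshold:.2f}")
    w = w - eta * grad(w)
```

With these settings the sharpness starts above $2/\eta$, the loss is non-monotone early on, and the iterates still converge; the final sharpness sits well below $2/\eta$, illustrating the gap the abstract attributes to factorization-type objectives. The paper's minimalist example is constructed so that this gap nearly closes, i.e. the endpoint's sharpness is only slightly below $2/\eta$.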

Authors (5)
  1. Xingyu Zhu (34 papers)
  2. Zixuan Wang (83 papers)
  3. Xiang Wang (279 papers)
  4. Mo Zhou (45 papers)
  5. Rong Ge (92 papers)
Citations (31)
