The alignment property of SGD noise and how it helps select flat minima: A stability analysis (2207.02628v3)

Published 6 Jul 2022 in stat.ML and cs.LG

Abstract: The phenomenon that stochastic gradient descent (SGD) favors flat minima has played a critical role in understanding the implicit regularization of SGD. In this paper, we provide an explanation of this striking phenomenon by relating the particular noise structure of SGD to its linear stability (Wu et al., 2018). Specifically, we consider training over-parameterized models with square loss. We prove that if a global minimum $\theta^*$ is linearly stable for SGD, then it must satisfy $\|H(\theta^*)\|_F \leq O(\sqrt{B}/\eta)$, where $\|H(\theta^*)\|_F$, $B$, and $\eta$ denote the Frobenius norm of the Hessian at $\theta^*$, the batch size, and the learning rate, respectively. Otherwise, SGD escapes from that minimum exponentially fast. Hence, for minima accessible to SGD, the sharpness -- as measured by the Frobenius norm of the Hessian -- is bounded independently of the model size and sample size. The key to obtaining these results is exploiting the particular structure of SGD noise: the noise concentrates in the sharp directions of the local landscape, and its magnitude is proportional to the loss value. This alignment property of SGD noise provably holds for linear networks and random feature models (RFMs), and is empirically verified for nonlinear networks. Moreover, the validity and practical relevance of our theoretical findings are justified by extensive experiments on the CIFAR-10 dataset.
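
To make the two quantities in the abstract concrete, here is a minimal NumPy sketch (not the authors' code; the dimensions, batch size B, perturbation scale, and number of top directions k are illustrative assumptions). For a tiny over-parameterized linear regression with square loss, it computes the Frobenius norm of the Hessian at an interpolating minimum, the largest learning rate compatible with a stability-style bound of the form $\|H(\theta^*)\|_F \leq \sqrt{B}/\eta$, and the fraction of SGD-noise energy lying in the sharpest Hessian directions.

```python
# Illustrative sketch (assumptions, not the paper's code): sharpness bound and
# noise alignment for over-parameterized linear regression with square loss.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                      # over-parameterized: d > n, so interpolation is possible
X = rng.standard_normal((n, d)) / np.sqrt(d)
theta_true = rng.standard_normal(d)
y = X @ theta_true                  # noiseless targets -> zero training loss attainable

# Minimum-norm interpolating solution (a global minimum of the square loss).
theta_star = X.T @ np.linalg.solve(X @ X.T, y)

# For the loss 1/(2n) * ||X theta - y||^2, the Hessian is constant: H = X^T X / n.
H = X.T @ X / n
hess_frob = np.linalg.norm(H, "fro")

# Stability-style check: for batch size B, how large can eta be before
# sqrt(B)/eta drops below ||H||_F?
B = 8
eta_max = np.sqrt(B) / hess_frob
print(f"||H||_F = {hess_frob:.3f}; sqrt(B)/eta >= ||H||_F requires eta <= {eta_max:.3f}")

# Probe the alignment of SGD noise with sharp directions, slightly off the minimum.
theta = theta_star + 1e-2 * rng.standard_normal(d)
residual = X @ theta - y
per_sample_grads = residual[:, None] * X          # gradient of each per-sample loss
mean_grad = per_sample_grads.mean(axis=0)
noise = per_sample_grads - mean_grad              # per-sample noise directions
noise_cov = noise.T @ noise / n

# Fraction of noise energy lying in the k sharpest Hessian directions.
k = 10
eigvals, eigvecs = np.linalg.eigh(H)              # eigenvalues in ascending order
top_dirs = eigvecs[:, -k:]
energy_in_sharp = np.trace(top_dirs.T @ noise_cov @ top_dirs) / np.trace(noise_cov)
print(f"Fraction of SGD-noise energy in the top-{k} sharp directions: {energy_in_sharp:.2f}")
```

Under these assumptions, most of the noise energy concentrates in the few sharp directions, which is the alignment property the paper proves for linear networks and RFMs.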

Authors (3)
  1. Lei Wu (319 papers)
  2. Mingze Wang (21 papers)
  3. Weijie Su (37 papers)
Citations (29)
