The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks (2108.11489v3)

Published 25 Aug 2021 in stat.ML, cs.LG, math.ST, and stat.TH

Abstract: The recent success of neural network models has shed light on a rather surprising statistical phenomenon: statistical models that perfectly fit noisy data can generalize well to unseen test data. Understanding this phenomenon of $\textit{benign overfitting}$ has attracted intense theoretical and empirical study. In this paper, we consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk when the covariates satisfy sub-Gaussianity and anti-concentration properties, and the noise is independent and sub-Gaussian. By leveraging recent results that characterize the implicit bias of this estimator, our bounds emphasize the role of both the quality of the initialization and the properties of the data covariance matrix in achieving low excess risk.
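The setting described in the abstract can be illustrated with a minimal toy sketch: a two-layer linear network trained by gradient descent (a discretization of gradient flow) on the squared loss over noisy linear data, run until it interpolates the training set. This is not the paper's construction; all dimensions, initialization scales, and the learning rate below are illustrative assumptions, and with isotropic covariates this toy need not exhibit low excess risk (the paper's bounds depend on the structure of the data covariance and the initialization).

```python
import numpy as np

# Illustrative sketch, not the paper's construction: a two-layer linear
# network f(x) = v @ (W @ x) trained by gradient descent (a discretization
# of gradient flow) on the squared loss until it interpolates noisy data.
rng = np.random.default_rng(0)
n, d, k = 20, 100, 50                     # samples, input dim, hidden width (assumed)
X = rng.normal(size=(n, d))               # sub-Gaussian covariates
theta_star = np.zeros(d)
theta_star[0] = 1.0                       # ground-truth linear signal
y = X @ theta_star + 0.1 * rng.normal(size=n)  # independent sub-Gaussian noise

W = 0.1 * rng.normal(size=(k, d))         # small (assumed) initialization
v = 0.1 * rng.normal(size=k)

lr = 0.01
for _ in range(50_000):
    resid = (X @ W.T) @ v - y             # training residuals
    v -= lr * (W @ X.T) @ resid / n       # gradient of the squared loss in v
    W -= lr * np.outer(v, resid @ X) / n  # gradient of the squared loss in W

theta_hat = W.T @ v                       # end-to-end linear map of the network
train_mse = np.mean((X @ theta_hat - y) ** 2)
# Under isotropic covariates, the excess risk of the learned predictor is
# the squared distance to the ground-truth parameter.
excess_risk = np.sum((theta_hat - theta_star) ** 2)
```

In the overparameterized regime (d > n) the network drives the training loss essentially to zero, i.e. it interpolates the noisy labels; whether this interpolation is benign is exactly the question the paper's excess-risk bounds address.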

Authors (3)
  1. Niladri S. Chatterji (21 papers)
  2. Philip M. Long (27 papers)
  3. Peter L. Bartlett (86 papers)
Citations (20)
