The Onset of Variance-Limited Behavior for Networks in the Lazy and Rich Regimes (2212.12147v1)

Published 23 Dec 2022 in stat.ML and cs.LG

Abstract: For small training set sizes $P$, the generalization error of wide neural networks is well-approximated by the error of an infinite width neural network (NN), either in the kernel or mean-field/feature-learning regime. However, after a critical sample size $P^*$, we empirically find the finite-width network generalization becomes worse than that of the infinite width network. In this work, we empirically study the transition from infinite-width behavior to this variance limited regime as a function of sample size $P$ and network width $N$. We find that finite-size effects can become relevant for very small dataset sizes on the order of $P^* \sim \sqrt{N}$ for polynomial regression with ReLU networks. We discuss the source of these effects using an argument based on the variance of the NN's final neural tangent kernel (NTK). This transition can be pushed to larger $P$ by enhancing feature learning or by ensemble averaging the networks. We find that the learning curve for regression with the final NTK is an accurate approximation of the NN learning curve. Using this, we provide a toy model which also exhibits $P^* \sim \sqrt{N}$ scaling and has $P$-dependent benefits from feature learning.
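The abstract's comparison rests on kernel ridge regression with the empirical NTK of a finite-width ReLU network evaluated at increasing sample sizes $P$. The following minimal NumPy sketch (not the authors' code) illustrates that setup for a one-hidden-layer ReLU network on a polynomial target; the width, ridge parameter, cubic target, and the use of the NTK at initialization as a stand-in for the final (post-training) NTK are all illustrative assumptions.

```python
# Minimal sketch: kernel ridge regression with the empirical NTK of a
# one-hidden-layer ReLU network (NTK parameterization), on a polynomial target.
import numpy as np

rng = np.random.default_rng(0)

def init_params(width, d_in=1):
    # NTK parameterization: f(x) = a^T relu(W x) / sqrt(width)
    W = rng.standard_normal((width, d_in))
    a = rng.standard_normal(width)
    return W, a

def ntk_features(X, params):
    """Per-example gradients of the output w.r.t. all parameters; their inner
    products give the empirical NTK."""
    W, a = params
    width = W.shape[0]
    pre = X @ W.T                          # (P, width) preactivations
    act = np.maximum(pre, 0.0)             # ReLU
    mask = (pre > 0).astype(X.dtype)
    grad_a = act / np.sqrt(width)                                       # df/da
    grad_W = (a * mask)[:, :, None] * X[:, None, :] / np.sqrt(width)    # df/dW
    return np.concatenate([grad_a, grad_W.reshape(len(X), -1)], axis=1)

def ntk_gram(X1, X2, params):
    return ntk_features(X1, params) @ ntk_features(X2, params).T

def krr_test_error(P, width, ridge=1e-6, P_test=500):
    # Polynomial (cubic) target on scalar inputs; purely illustrative.
    X_tr = rng.uniform(-1, 1, size=(P, 1))
    X_te = rng.uniform(-1, 1, size=(P_test, 1))
    target = lambda X: X[:, 0] ** 3 - X[:, 0]
    y_tr, y_te = target(X_tr), target(X_te)

    # NTK at initialization stands in for the "final" NTK of a trained network.
    params = init_params(width)
    K_tr = ntk_gram(X_tr, X_tr, params)
    K_te = ntk_gram(X_te, X_tr, params)
    alpha = np.linalg.solve(K_tr + ridge * np.eye(P), y_tr)
    return np.mean((K_te @ alpha - y_te) ** 2)

# Sweep sample size P at fixed width to trace out a learning curve.
for P in [8, 32, 128, 512]:
    print(f"P={P:4d}  test MSE={krr_test_error(P, width=1024):.4e}")
```

Repeating the sweep over several widths $N$ (and, per the abstract, over ensembles of networks) is how one would probe where the finite-width curve departs from the infinite-width one around $P^* \sim \sqrt{N}$.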

Authors (4)
  1. Alexander Atanasov (14 papers)
  2. Blake Bordelon (27 papers)
  3. Sabarish Sainathan (2 papers)
  4. Cengiz Pehlevan (81 papers)
Citations (23)
