Kernel and Rich Regimes in Overparametrized Models (1906.05827v3)

Published 13 Jun 2019 in cs.LG and stat.ML

Abstract: A recent line of work studies overparametrized neural networks in the "kernel regime," i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach, we show how the scale of the initialization controls the transition between the "kernel" (aka lazy) and "rich" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a simple two-layer model that already exhibits an interesting and meaningful transition between the kernel and rich regimes, and we demonstrate the transition for more complex matrix factorization models and multilayer non-linear networks.
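The transition the abstract describes can be illustrated with a small numerical sketch of the paper's simple two-layer model, a diagonal linear network $f(x) = \langle u \odot u - v \odot v, x\rangle$ initialized at $u = v = \alpha\mathbf{1}$. This is a toy reproduction under assumed settings, not the paper's actual experiments: the problem sizes, learning rates, step counts, and $\alpha$ values below are illustrative choices. Large $\alpha$ should behave like the kernel regime (a dense, minimum-$\ell_2$-flavored interpolator), while small $\alpha$ should exhibit the rich regime's sparsity-inducing implicit bias.

```python
import numpy as np

# Toy sketch (assumed settings): two-layer diagonal model f(x) = <u*u - v*v, x>,
# initialized at u = v = alpha. The init scale alpha controls the regime:
# large alpha ~ kernel/lazy (dense solution), small alpha ~ rich (sparse solution).

def train_diag_net(X, y, alpha, lr, steps):
    """Gradient descent on squared loss with u = v = alpha at initialization."""
    n, d = X.shape
    u = alpha * np.ones(d)
    v = alpha * np.ones(d)
    for _ in range(steps):
        beta = u * u - v * v                      # effective linear predictor
        g = (2.0 / n) * X.T @ (X @ beta - y)      # dL/dbeta
        # chain rule through beta = u*u - v*v:
        u, v = u - lr * 2 * u * g, v + lr * 2 * v * g
    return u * u - v * v

rng = np.random.default_rng(0)
n, d, k = 20, 40, 3                  # underdetermined problem, 3-sparse target
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:k] = 1.0
y = X @ w_star

beta_rich = train_diag_net(X, y, alpha=0.01, lr=0.01, steps=50000)    # small init
beta_kernel = train_diag_net(X, y, alpha=10.0, lr=1e-4, steps=30000)  # large init

# Both interpolate the training data, but the small-init ("rich") solution
# should have a much smaller l1 norm, i.e. be closer to the sparse target.
print(np.abs(beta_rich).sum(), np.abs(beta_kernel).sum())
```

In this sketch the learning rate is reduced for large $\alpha$ only to keep discrete gradient descent stable; along the gradient-flow trajectory this rescaling does not change the path taken in predictor space, so the qualitative gap in the $\ell_1$ norms reflects the initialization scale, not the step size.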

Authors (8)
  1. Blake Woodworth (30 papers)
  2. Suriya Gunasekar (34 papers)
  3. Pedro Savarese (14 papers)
  4. Edward Moroshko (15 papers)
  5. Itay Golan (5 papers)
  6. Jason Lee (33 papers)
  7. Daniel Soudry (76 papers)
  8. Nathan Srebro (145 papers)
