Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy (2007.06738v1)

Published 13 Jul 2020 in cs.LG and stat.ML

Abstract: We provide a detailed asymptotic study of gradient flow trajectories and their implicit optimization bias when minimizing the exponential loss over "diagonal linear networks". This is the simplest model displaying a transition between "kernel" and non-kernel ("rich" or "active") regimes. We show how the transition is controlled by the relationship between the initialization scale and how accurately we minimize the training loss. Our results indicate that some limit behaviors of gradient descent only kick in at ridiculous training accuracies (well beyond $10^{-100}$). Moreover, the implicit bias at reasonable initialization scales and training accuracies is more complex and not captured by these limits.
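To make the setup concrete, below is a minimal NumPy sketch of this setting, not the authors' code: the standard $u \odot u - v \odot v$ parametrization of a diagonal linear network, trained by plain gradient descent on the exponential loss. The data, learning rate, and step count are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the setting in the abstract:
# a "diagonal linear network" parametrizes a linear predictor as
# beta = u*u - v*v and is trained by gradient descent on the exponential
# loss. The initialization scale `alpha` is the knob the paper identifies
# as controlling the kernel-vs-rich transition. Data, learning rate, and
# step count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data whose ground-truth predictor is sparse.
n, d = 20, 5
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])                     # label depends on one coordinate

def train(alpha, lr=1e-3, steps=200_000):
    """Gradient descent on L(u, v) = sum_i exp(-y_i * <u*u - v*v, x_i>)."""
    u = alpha * np.ones(d)
    v = alpha * np.ones(d)
    for _ in range(steps):
        beta = u * u - v * v
        margins = y * (X @ beta)
        g = -(y * np.exp(-margins)) @ X  # dL/dbeta
        u -= lr * 2.0 * u * g            # chain rule through +u*u
        v += lr * 2.0 * v * g            # chain rule through -v*v
    return u * u - v * v

# Large alpha ~ "kernel" regime (dense, l2-like direction);
# small alpha ~ "rich" regime (sparse, l1-like direction).
for alpha in (2.0, 0.01):
    beta = train(alpha)
    print(f"alpha={alpha:>5}: direction = {np.round(beta / np.abs(beta).max(), 3)}")
```

With small `alpha` the recovered direction should concentrate on the first coordinate (a sparse, $\ell_1$-like solution), while large `alpha` spreads weight across coordinates (an $\ell_2$-like, kernel-regime solution). Training longer, i.e. to higher training accuracy, slowly moves even the large-`alpha` run toward the rich limit, which is the scale-vs-accuracy interplay the abstract describes.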

Authors (6)
  1. Edward Moroshko (15 papers)
  2. Suriya Gunasekar (34 papers)
  3. Blake Woodworth (30 papers)
  4. Jason D. Lee (151 papers)
  5. Nathan Srebro (145 papers)
  6. Daniel Soudry (76 papers)
Citations (83)
