The Low-Rank Simplicity Bias in Deep Networks (2103.10427v4)

Published 18 Mar 2021 in cs.LG and cs.CV

Abstract: Modern deep neural networks are highly over-parameterized compared to the data on which they are trained, yet they often generalize remarkably well. A flurry of recent work has asked: why do deep networks not overfit to their training data? In this work, we make a series of empirical observations that investigate and extend the hypothesis that deeper networks are inductively biased to find solutions with lower effective rank embeddings. We conjecture that this bias exists because the volume of functions that map to low-effective-rank embeddings increases with depth. We show empirically that our claim holds for finite-width linear and non-linear models under practical learning paradigms, and that on natural data these are often the solutions that generalize well. We then show that the simplicity bias exists both at initialization and after training and is resilient to changes in hyper-parameters and learning methods. We further demonstrate how linear over-parameterization of deep non-linear models can be used to induce a low-rank bias, improving generalization performance on CIFAR and ImageNet without changing the modeling capacity.
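
To make the two quantities in the abstract concrete, below is a minimal PyTorch sketch (not the authors' released code): an effective-rank estimate based on the entropy of normalized singular values (the common Roy & Vetterli formulation, which matches how the notion is typically measured), and a linearly over-parameterized layer that factors one linear map into a product of several linear maps and can be collapsed back to a single layer. The class name, the depth parameter, and the collapse helper are illustrative assumptions, not the paper's API.

```python
# Hedged sketch: effective rank of an embedding matrix, and linear
# over-parameterization of a single linear layer. Assumes PyTorch.
import torch
import torch.nn as nn


def effective_rank(embeddings: torch.Tensor, eps: float = 1e-12) -> float:
    """Effective rank = exp(Shannon entropy of normalized singular values)."""
    s = torch.linalg.svdvals(embeddings)       # singular values
    p = s / (s.sum() + eps)                    # normalize to a distribution
    entropy = -(p * torch.log(p + eps)).sum()  # Shannon entropy
    return torch.exp(entropy).item()


class OverparameterizedLinear(nn.Module):
    """Factor one d_in -> d_out linear map into a product of `depth` linear maps.

    The composition is still a single linear map (same capacity), but training
    the factored form is conjectured to bias solutions toward lower effective rank.
    """

    def __init__(self, d_in: int, d_out: int, depth: int = 3):
        super().__init__()
        dims = [d_in] + [d_out] * depth
        # Only the last factor carries a bias, so the product collapses cleanly.
        self.layers = nn.Sequential(
            *[nn.Linear(dims[i], dims[i + 1], bias=(i == depth - 1))
              for i in range(depth)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

    def collapse(self) -> nn.Linear:
        """Fold the factors back into one nn.Linear for inference."""
        with torch.no_grad():
            weight = self.layers[0].weight
            for layer in self.layers[1:]:
                weight = layer.weight @ weight
            merged = nn.Linear(weight.shape[1], weight.shape[0])
            merged.weight.copy_(weight)
            merged.bias.copy_(self.layers[-1].bias)
        return merged


# Usage: compare effective ranks; the collapsed layer computes the same map.
x = torch.randn(256, 128)
layer = OverparameterizedLinear(128, 64, depth=4)
print(effective_rank(layer(x)))              # deeper factorizations often give lower values
print(effective_rank(layer.collapse()(x)))   # same linear map, so the same value
```

The collapse step illustrates the abstract's point that over-parameterization here is purely linear: expressive capacity is unchanged, and only the optimization trajectory (and hence the bias of the learned solution) differs.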

Authors (6)
  1. Minyoung Huh (10 papers)
  2. Hossein Mobahi (24 papers)
  3. Richard Zhang (61 papers)
  4. Brian Cheung (24 papers)
  5. Pulkit Agrawal (103 papers)
  6. Phillip Isola (84 papers)
Citations (106)
