
On the geometry of generalization and memorization in deep neural networks (2105.14602v1)

Published 30 May 2021 in cs.LG, cond-mat.dis-nn, and stat.ML

Abstract: Understanding how large neural networks avoid memorizing training data is key to explaining their high generalization performance. To examine the structure of when and where memorization occurs in a deep network, we use a recently developed replica-based mean field theoretic geometric analysis method. We find that all layers preferentially learn from examples which share features, and link this behavior to generalization performance. Memorization predominantly occurs in the deeper layers, due to decreasing radius and dimension of the object manifolds, whereas early layers are minimally affected. This predicts that generalization can be restored by reverting the weights of the final few layers to earlier epochs, before significant memorization occurred, which is confirmed by the experiments. Additionally, by studying generalization under different model sizes, we reveal the connection between the double descent phenomenon and the underlying model geometry. Finally, analytical analysis shows that networks avoid memorization early in training because, close to initialization, the gradient contributions from permuted examples are small. These findings provide quantitative evidence for the structure of memorization across layers of a deep neural network, the drivers of that structure, and its connection to manifold geometric properties.
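
The abstract's claim that generalization can be restored by rewinding only the final layers suggests a simple experiment. The PyTorch-style sketch below illustrates the general idea under stated assumptions; the function and checkpoint names are hypothetical and do not come from the paper's code.

```python
# Hypothetical sketch of a "rewind the final layers" experiment: keep the
# early layers at their final-epoch weights, but revert the last few layers
# to a checkpoint saved before significant memorization occurred.
import copy
import torch


def rewind_final_layers(model, early_state_dict, layer_prefixes):
    """Return a copy of `model` whose parameters matching `layer_prefixes`
    (e.g. ['layer4', 'fc']) are replaced by the values in `early_state_dict`,
    a state_dict saved at an earlier training epoch. All other parameters
    keep their current (final-epoch) values."""
    rewound = copy.deepcopy(model)
    state = rewound.state_dict()
    for name, tensor in early_state_dict.items():
        if any(name.startswith(prefix) for prefix in layer_prefixes):
            state[name] = tensor.clone()
    rewound.load_state_dict(state)
    return rewound


# Example usage (assumes checkpoints were saved during training):
# model.load_state_dict(torch.load('checkpoint_final.pt'))
# early = torch.load('checkpoint_epoch_10.pt')  # before heavy memorization
# rewound_model = rewind_final_layers(model, early, ['layer4', 'fc'])
# evaluate(rewound_model, test_loader)  # test accuracy should partially recover
```

If memorization is indeed concentrated in the deeper layers, evaluating the rewound model on held-out data should recover much of the generalization lost late in training, which is the behavior the abstract reports.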

Authors (6)
  1. Cory Stephenson (14 papers)
  2. Suchismita Padhy (3 papers)
  3. Abhinav Ganesh (3 papers)
  4. Yue Hui (2 papers)
  5. Hanlin Tang (34 papers)
  6. SueYeon Chung (30 papers)
Citations (64)
