An Unconstrained Layer-Peeled Perspective on Neural Collapse (2110.02796v2)

Published 6 Oct 2021 in cs.LG and stat.ML

Abstract: Neural collapse is a highly symmetric geometric pattern of neural networks that emerges during the terminal phase of training, with profound implications for the generalization performance and robustness of the trained networks. To understand how the last-layer features and classifiers come to exhibit this recently discovered implicit bias, we introduce a surrogate model called the unconstrained layer-peeled model (ULPM). We prove that gradient flow on this model converges to critical points of a minimum-norm separation problem that exhibits neural collapse at its global minimizer. Moreover, we show that the ULPM with the cross-entropy loss has a benign global loss landscape, which allows us to prove that all critical points are strict saddle points except for the global minimizers, which exhibit the neural collapse phenomenon. Empirically, we show that our results also hold during the training of neural networks on real-world tasks when explicit regularization or weight decay is not used.
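For orientation, here is a minimal sketch of the ULPM objective as the abstract describes it, assuming K classes with n training examples each, a linear classifier W, and the cross-entropy loss; the symbols h_{k,i}, y_k, and the 1/(Kn) averaging are illustrative assumptions rather than the paper's exact formulation. "Layer-peeled" means the last-layer features h_{k,i} are treated as free optimization variables alongside W, and "unconstrained" means no norm constraints or weight decay are imposed:

\[
\min_{W,\,H}\; \frac{1}{Kn} \sum_{k=1}^{K} \sum_{i=1}^{n} \ell\bigl(W h_{k,i},\, y_k\bigr),
\qquad
\ell(z, y_k) \;=\; -\log \frac{\exp(z_k)}{\sum_{j=1}^{K} \exp(z_j)}.
\]

Under the same assumptions, the minimum-norm separation problem referenced in the abstract can plausibly be read as minimizing \(\|W\|_F^2 + \|H\|_F^2\) subject to the margin constraints \((w_k - w_j)^\top h_{k,i} \ge 1\) for all \(i\) and all \(j \neq k\); neural collapse at the global minimizer then corresponds to the familiar picture in which within-class features collapse to their class means and the class means form a simplex equiangular tight frame.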

Authors (5)
  1. Wenlong Ji (12 papers)
  2. Yiping Lu (32 papers)
  3. Yiliang Zhang (10 papers)
  4. Zhun Deng (38 papers)
  5. Weijie J. Su (70 papers)
Citations (80)
