
Towards Demystifying the Generalization Behaviors When Neural Collapse Emerges (2310.08358v1)

Published 12 Oct 2023 in cs.LG

Abstract: Neural Collapse (NC) is a well-known phenomenon of deep neural networks in the terminal phase of training (TPT). It is characterized by the collapse of the features and the classifier into a symmetric structure known as the simplex equiangular tight frame (ETF). While there have been extensive studies on optimization characteristics showing the global optimality of neural collapse, little research has examined the generalization behaviors that accompany NC. In particular, the important phenomenon of generalization improvement during TPT has remained an empirical observation lacking a rigorous theoretical explanation. In this paper, we establish a connection between the minimization of the cross-entropy (CE) loss and a multi-class SVM during TPT, and then derive a multi-class margin generalization bound, which provides a theoretical explanation for why continued training can still improve accuracy on the test set even after the training accuracy has reached 100%. Additionally, our further theoretical results indicate that different alignments between labels and features in a simplex ETF can result in varying degrees of generalization improvement, even though all models reach NC and demonstrate similar optimization performance on the training set. We refer to this newly discovered property as "non-conservative generalization". In experiments, we also provide empirical observations that verify the implications of our theoretical results.
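To make the structure the abstract refers to concrete, the simplex ETF for K classes can be built with a standard construction (this is the textbook definition, not code from the paper): K unit vectors whose pairwise cosine similarity is exactly -1/(K-1), the most "spread out" a set of K equiangular directions can be.

```python
import numpy as np

def simplex_etf(K: int) -> np.ndarray:
    """Return a K x K matrix whose columns form a simplex ETF:
    M = sqrt(K/(K-1)) * (I_K - (1/K) * 1 1^T).
    Each column has unit norm; distinct columns have inner
    product -1/(K-1)."""
    return np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

# Example: 4 classes
M = simplex_etf(4)
G = M.T @ M  # Gram matrix: 1 on the diagonal, -1/3 off the diagonal
```

Under NC, both the class-mean features and the classifier rows converge (up to rotation and scaling) to such a frame; the paper's "non-conservative generalization" result says that *which* label is assigned to *which* ETF direction still matters for test accuracy, even though every assignment yields the same training loss.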

Authors (8)
  1. Peifeng Gao (3 papers)
  2. Qianqian Xu (74 papers)
  3. Yibo Yang (80 papers)
  4. Peisong Wen (10 papers)
  5. Huiyang Shao (4 papers)
  6. Zhiyong Yang (43 papers)
  7. Bernard Ghanem (256 papers)
  8. Qingming Huang (168 papers)
Citations (3)
