
Cross Entropy versus Label Smoothing: A Neural Collapse Perspective (2402.03979v2)

Published 6 Feb 2024 in cs.LG

Abstract: Label smoothing loss is a widely adopted technique to mitigate overfitting in deep neural networks. This paper studies label smoothing from the perspective of Neural Collapse (NC), a powerful empirical and theoretical framework which characterizes model behavior during the terminal phase of training. We first show empirically that models trained with label smoothing converge faster to neural collapse solutions and attain a stronger level of neural collapse. Additionally, we show that at the same level of NC1, models under label smoothing loss exhibit intensified NC2. These findings provide valuable insights into the performance benefits and enhanced model calibration under label smoothing loss. We then leverage the unconstrained feature model to derive closed-form solutions for the global minimizers for both loss functions and further demonstrate that models under label smoothing have a lower conditioning number and, therefore, theoretically converge faster. Our study, combining empirical evidence and theoretical results, not only provides nuanced insights into the differences between label smoothing and cross-entropy losses, but also serves as an example of how the powerful neural collapse framework can be used to improve our understanding of DNNs.
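
For context, a minimal sketch of the standard label-smoothing formulation (the generic textbook definition, not notation taken from this paper): with smoothing parameter \alpha and K classes, the one-hot target y is replaced by a smoothed target, and the loss is the cross entropy against that target:

    \tilde{y}_k = (1 - \alpha)\, y_k + \frac{\alpha}{K}, \qquad
    \mathcal{L}_{\mathrm{LS}} = -\sum_{k=1}^{K} \tilde{y}_k \log p_k,

where p_k are the model's softmax probabilities. Setting \alpha = 0 recovers the plain cross-entropy loss that the abstract compares against.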

Authors (7)
  1. Li Guo (184 papers)
  2. Keith Ross (20 papers)
  3. Zifan Zhao (3 papers)
  4. George Andriopoulos (7 papers)
  5. Shuyang Ling (22 papers)
  6. Yufeng Xu (4 papers)
  7. Zixuan Dong (6 papers)
Citations (4)
