Chi-square Loss for Softmax: an Echo of Neural Network Structure (2108.13822v1)

Published 31 Aug 2021 in cs.LG and cs.AI

Abstract: Softmax combined with cross-entropy is widely used in classification, where it evaluates the similarity between two discrete distributions (predictions and true labels). Inspired by the chi-square test, we designed a new loss function, called chi-square loss, which also works with Softmax. Chi-square loss has a statistical background. We proved that it is unbiased in optimization and clarified the conditions for its use (its formula requires that it be combined with label smoothing). In addition, we studied the sample distribution of this loss function by visualization and found that the distribution is related to the neural network structure, which is distinct from cross-entropy. In the past, the influence of structure was often ignored when visualizing. Chi-square loss can detect changes in neural network structure because it is very strict, and we explained the reason for this strictness. We also studied the influence of label smoothing and discussed the relationship between label smoothing and training accuracy and stability. Since chi-square loss is very strict, performance degrades when dealing with samples from very many classes.
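The abstract does not give the exact formula, so the following is only a minimal sketch of what a chi-square-style loss over Softmax outputs could look like, assuming the standard Pearson chi-square statistic and conventional label smoothing; the function name, smoothing value, and shapes are illustrative assumptions, not the paper's definition. It also illustrates why label smoothing is required: without it, the true-label distribution contains zeros and the denominator is undefined.

```python
import torch
import torch.nn.functional as F

def chi_square_loss(logits, targets, num_classes, smoothing=0.1):
    """Hypothetical chi-square-style loss (sketch, not the paper's exact formula):
    Pearson chi-square statistic between Softmax predictions and label-smoothed
    targets. Smoothing keeps every target probability strictly positive, so the
    denominator is never zero."""
    # Label-smoothed targets: the true class gets (1 - smoothing) plus the
    # uniform share; every other class gets smoothing / num_classes.
    q = F.one_hot(targets, num_classes).float() * (1.0 - smoothing) + smoothing / num_classes
    p = F.softmax(logits, dim=-1)
    # Pearson chi-square statistic per sample, averaged over the batch.
    return ((p - q) ** 2 / q).sum(dim=-1).mean()

# Usage with random data (shapes only; not the paper's experiments).
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = chi_square_loss(logits, targets, num_classes=10)
```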

Citations (1)
