Thorough investigation of Bayes-risk cross-entropy and Type II maximum likelihood losses for evidential classification

Thoroughly investigate the Bayes risk with cross-entropy loss and the Type II Maximum Likelihood loss for training evidential classification models; specifically, analyze and compare how these two losses behave when used in evidential deep learning classification.

Background

The paper studies different evidential loss functions for classification and reports that the Bayes risk with sum of squares (MSE) loss is theoretically bounded and empirically sub-optimal. The authors therefore focus on the remaining two losses: Bayes risk with cross-entropy loss and Type II Maximum Likelihood loss.
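To make the comparison concrete, the two losses can be written in the standard evidential deep learning parameterization (as in Sensoy et al., 2018, which this line of work follows); this is a sketch of the usual forms, with notation chosen here rather than copied from the paper. For a sample with one-hot label $y \in \{0,1\}^K$ and predicted Dirichlet parameters $\alpha_k = e_k + 1$ (non-negative evidence $e_k$), with Dirichlet strength $S = \sum_{k=1}^{K} \alpha_k$:

```latex
% Bayes risk with cross-entropy loss ("digamma loss"):
\mathcal{L}_{\mathrm{digamma}}(\alpha, y) \;=\; \sum_{k=1}^{K} y_k \,\bigl( \psi(S) - \psi(\alpha_k) \bigr)

% Type II Maximum Likelihood loss ("log loss"):
\mathcal{L}_{\mathrm{log}}(\alpha, y) \;=\; \sum_{k=1}^{K} y_k \,\bigl( \log S - \log \alpha_k \bigr)
```

Here $\psi$ is the digamma function. The two losses differ only in replacing $\log(\cdot)$ with $\psi(\cdot)$, which explains why their empirical behavior is often close.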

In experiments on MNIST and CIFAR100, the two losses (cross-entropy-based and Type II Maximum Likelihood) exhibit comparable empirical performance, and the authors proceed with Type II Maximum Likelihood primarily for simplicity and certain theoretical conveniences. They explicitly note that a comprehensive analysis comparing these two losses is deferred.
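The near-identical behavior of the two losses can be checked numerically. Below is a minimal self-contained sketch (pure standard library; the small `digamma` helper and the example Dirichlet parameters are illustrative assumptions, not code from the paper) that evaluates both losses on the same prediction:

```python
import math

def digamma(x):
    # Small helper: psi(x) via the recurrence psi(x+1) = psi(x) + 1/x
    # plus an asymptotic series for large x (assumes x > 0).
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def type2_ml_loss(alpha, y):
    # Type II Maximum Likelihood ("log") loss for one sample:
    # sum_k y_k (log S - log alpha_k), S = sum_k alpha_k
    S = sum(alpha)
    return sum(yk * (math.log(S) - math.log(ak)) for yk, ak in zip(y, alpha))

def digamma_loss(alpha, y):
    # Bayes risk with cross-entropy ("digamma") loss for one sample:
    # sum_k y_k (psi(S) - psi(alpha_k))
    S = sum(alpha)
    return sum(yk * (digamma(S) - digamma(ak)) for yk, ak in zip(y, alpha))

# Illustrative prediction: Dirichlet parameters alpha = evidence + 1,
# one-hot label on class 0.
alpha = [10.0, 2.0, 1.0]
y = [1.0, 0.0, 0.0]
print(type2_ml_loss(alpha, y))   # log(13) - log(10)  ≈ 0.2624
print(digamma_loss(alpha, y))    # psi(13) - psi(10)  ≈ 0.2742
```

Since $\psi(x) \approx \log x - 1/(2x)$, the digamma loss slightly exceeds the log loss whenever $\alpha_k < S$, and the gap shrinks as evidence grows, which is consistent with the comparable performance reported on MNIST and CIFAR100.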

References

We leave a thorough investigation of these two evidential losses (eqn:evDigammaloss and eqn:evLogloss) as future work.

Learn to Accumulate Evidence from All Training Samples: Theory and Practice  (2306.11113 - Pandey et al., 2023) in Section: Impact of loss function (main text)