
Second-Order Uncertainty Quantification: A Distance-Based Approach (2312.00995v1)

Published 2 Dec 2023 in cs.LG and stat.ML

Abstract: In recent years, various approaches to representing and quantifying predictive uncertainty in machine learning, notably in the setting of classification, have been proposed on the basis of second-order probability distributions, i.e., predictions in the form of distributions over probability distributions. No fully conclusive solution has emerged, however: recent criticisms of commonly used uncertainty measures associated with second-order distributions have identified undesirable theoretical properties of these measures. In light of these criticisms, we propose a set of formal criteria that meaningful measures of predictive uncertainty based on second-order distributions should obey. Moreover, we provide a general framework for developing uncertainty measures that account for these criteria, and offer an instantiation based on the Wasserstein distance, for which we prove that all criteria are satisfied.
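The idea of a distance-based measure over a second-order distribution can be sketched numerically. The snippet below is an illustrative toy, not the paper's exact construction: it models the second-order distribution as a Dirichlet over categorical predictions and estimates epistemic uncertainty as the expected Wasserstein distance between sampled first-order distributions and the mean prediction. The class support, sample count, and the choice of the mean as reference point are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
classes = np.arange(3)  # support of the categorical prediction (3 classes)

def epistemic_uncertainty(alpha, n_samples=2000):
    """Expected Wasserstein distance from sampled first-order
    distributions to the mean prediction of a Dirichlet (illustrative)."""
    thetas = rng.dirichlet(alpha, size=n_samples)  # draws from the second-order distribution
    mean = alpha / alpha.sum()                     # mean first-order prediction
    return float(np.mean([
        wasserstein_distance(classes, classes, t, mean) for t in thetas
    ]))

# A concentrated Dirichlet (much evidence) should yield low epistemic
# uncertainty; a flat Dirichlet (little evidence) should yield high.
lo = epistemic_uncertainty(np.array([100.0, 100.0, 100.0]))
hi = epistemic_uncertainty(np.array([1.0, 1.0, 1.0]))
print(lo, hi)
```

In this toy, the concentrated Dirichlet produces a smaller expected distance than the flat one, matching the intuition that epistemic uncertainty should shrink as the second-order distribution concentrates.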

