Second-Order Uncertainty Quantification: Variance-Based Measures (2401.00276v1)

Published 30 Dec 2023 in cs.LG and stat.ML

Abstract: Uncertainty quantification is a critical aspect of machine learning models, providing important insights into the reliability of predictions and aiding the decision-making process in real-world applications. This paper proposes a novel way to use variance-based measures to quantify uncertainty on the basis of second-order distributions in classification problems. A distinctive feature of the measures is the ability to reason about uncertainties on a class-based level, which is useful in situations where nuanced decision-making is required. Recalling some properties from the literature, we highlight that the variance-based measures satisfy important (axiomatic) properties. In addition to this axiomatic approach, we present empirical results showing the measures to be effective and competitive with commonly used entropy-based measures.
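The class-based, variance-based decomposition the abstract describes can be sketched with the law of total variance: given samples of class-probability vectors drawn from a second-order distribution (e.g. an ensemble or Bayesian posterior samples), the total variance of each class indicator splits exactly into an aleatoric term (expected conditional variance) and an epistemic term (variance of the conditional mean). The function below is a minimal illustrative sketch of that general idea, not the paper's exact formulation; the function name and array layout are assumptions for the example.

```python
import numpy as np

def variance_based_uncertainty(probs):
    """Per-class variance-based uncertainty decomposition.

    probs: (M, K) array of M sampled class-probability vectors
           (rows sum to 1), representing a second-order distribution.
    Returns (total, aleatoric, epistemic), each of shape (K,).
    """
    probs = np.asarray(probs, dtype=float)
    # Aleatoric: expected Bernoulli variance of each class indicator,
    # E[p_k (1 - p_k)] over the second-order distribution.
    aleatoric = (probs * (1.0 - probs)).mean(axis=0)
    # Epistemic: variance of the class probability itself, Var[p_k].
    epistemic = probs.var(axis=0)
    # Law of total variance: total = aleatoric + epistemic,
    # which equals mean_p_k * (1 - mean_p_k) exactly.
    total = aleatoric + epistemic
    return total, aleatoric, epistemic

# Example: three sampled predictive distributions over K = 3 classes.
samples = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.9, 0.05, 0.05],
])
total, alea, epi = variance_based_uncertainty(samples)
```

Note that per-class values support the nuanced, class-level reasoning the abstract highlights: a class can carry high epistemic uncertainty (the samples disagree about its probability) even when the averaged prediction looks confident.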
