Quantifying Deep Learning Model Uncertainty in Conformal Prediction (2306.00876v2)

Published 1 Jun 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Precise estimation of predictive uncertainty in deep neural networks is a critical requirement for reliable decision-making in machine learning and statistical modeling, particularly in the context of medical AI. Conformal Prediction (CP) has emerged as a promising framework for representing model uncertainty by providing well-calibrated confidence levels for individual predictions. However, the quantification of model uncertainty in conformal prediction remains an active research area that is yet to be fully addressed. In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations. We propose a probabilistic approach to quantifying the model uncertainty derived from the prediction sets produced in conformal prediction and provide certified boundaries for the computed uncertainty. By doing so, we allow model uncertainty measured by CP to be compared with other uncertainty quantification methods such as Bayesian approaches (e.g., MC-Dropout and DeepEnsemble) and Evidential approaches.
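
For context, a minimal sketch of how split conformal prediction forms the prediction sets from which such an uncertainty measure could be derived. This is not the authors' implementation; the nonconformity score, threshold rule, and all names are illustrative assumptions.

# Minimal sketch of split conformal prediction for classification (assumed
# setup, not the paper's method): calibrate a score threshold, then build
# prediction sets whose size reflects the model's uncertainty for each input.
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Return prediction sets with marginal coverage of at least 1 - alpha."""
    n = len(cal_labels)
    # Nonconformity score: one minus the softmax probability of the true class.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample correction ceil((n+1)(1-alpha))/n.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(cal_scores, q_level, method="higher")
    # A test input's set keeps every class whose score is within the threshold.
    return [np.where(1.0 - probs <= qhat)[0] for probs in test_probs]

# The paper's contribution concerns turning such sets into a calibrated
# uncertainty value comparable with MC-Dropout, DeepEnsemble, or Evidential
# scores; one simple proxy (an assumption here) is the normalized set size.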

References (16)
  1. Uncertainty sets for image classifiers using conformal prediction. arXiv preprint arXiv:2009.14193.
  2. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, 1050–1059. PMLR.
  3. On calibration of modern neural networks. In International Conference on Machine Learning, 1321–1330. PMLR.
  4. Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474.
  5. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523): 1094–1111.
  6. Measuring calibration in deep learning. In CVPR Workshops, volume 2.
  7. Inductive confidence machines for regression. In Machine Learning: ECML 2002, 13th European Conference on Machine Learning, Helsinki, Finland, August 19–23, 2002, Proceedings, 345–356. Springer.
  8. High-quality prediction intervals for deep learning: A distribution-free, ensembled approach. In International Conference on Machine Learning, 4075–4084. PMLR.
  9. Platt, J.; et al. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3): 61–74.
  10. Classification with valid and adaptive coverage. Advances in Neural Information Processing Systems, 33: 3581–3591.
  11. Evidential deep learning to quantify classification uncertainty. Advances in Neural Information Processing Systems, 31.
  12. Misclassification risk and uncertainty quantification in deep classifiers. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2484–2492.
  13. Machine-learning applications of algorithmic randomness. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML '99), 444–453. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. ISBN 1558606122.
  14. Algorithmic learning in a random world, volume 29. Springer.
  15. Evidential deep neural networks for uncertain data classification. In International Conference on Knowledge Science, Engineering and Management, 427–437. Springer.
  16. On reject and refine options in multicategory classification. Journal of the American Statistical Association, 113(522): 730–745.
Authors (2)
  1. Hamed Karimi (6 papers)
  2. Reza Samavi (11 papers)
Citations (4)
