
Confidence Calibration with Bounded Error Using Transformations

Published 25 Feb 2021 in cs.LG (arXiv:2102.12680v2)

Abstract: As machine learning techniques are adopted in new domains, especially in safety-critical systems such as autonomous vehicles, accurate output uncertainty estimation becomes crucial. Many approaches have therefore been proposed to calibrate neural networks so that they accurately estimate the likelihood of misclassification. However, while these methods achieve low calibration error, there is room for further improvement, especially in high-dimensional settings such as ImageNet. In this paper, we introduce a calibration algorithm, named Hoki, that works by applying random transformations to the neural network logits. We provide a sufficient condition for perfect calibration based on the number of label prediction changes observed after applying the transformations. Experiments on multiple datasets and models show that the proposed approach generally outperforms state-of-the-art calibration algorithms, especially on the challenging ImageNet dataset. Finally, Hoki is scalable: it requires execution time comparable to that of temperature scaling.
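The abstract's core idea, counting label changes under random logit transformations as a calibration signal, can be illustrated with a minimal sketch. This is not the paper's actual Hoki algorithm; the choice of additive Gaussian noise as the "random transformation", the noise scale, and the function name are all assumptions made here for illustration. The intuition it demonstrates: a confident prediction (large logit margin) rarely flips under perturbation, while a near-tie flips often, and that flip rate is informative about misclassification likelihood.

```python
import numpy as np

def label_change_rate(logits, num_transforms=100, noise_scale=0.5, seed=0):
    """Estimate prediction instability for one logit vector.

    Applies `num_transforms` random additive Gaussian perturbations
    (a hypothetical stand-in for the paper's transformations) and
    returns the fraction of draws whose argmax label differs from
    the unperturbed prediction.
    """
    rng = np.random.default_rng(seed)
    base_label = int(np.argmax(logits))
    changes = 0
    for _ in range(num_transforms):
        perturbed = logits + rng.normal(0.0, noise_scale, size=logits.shape)
        if int(np.argmax(perturbed)) != base_label:
            changes += 1
    return changes / num_transforms

# A wide logit margin is stable; a near-tie flips frequently.
stable_rate = label_change_rate(np.array([10.0, 0.0, 0.0]))
tied_rate = label_change_rate(np.array([1.0, 0.9, 0.0]))
```

In this toy setup `stable_rate` is near zero while `tied_rate` is substantial, which is the kind of signal a calibrator could map to a misclassification probability.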
