Post Training Uncertainty Calibration of Deep Networks For Medical Image Segmentation (2010.14290v1)

Published 27 Oct 2020 in eess.IV

Abstract: Neural networks for automated image segmentation are typically trained to achieve maximum accuracy, while less attention has been given to the calibration of their confidence scores. However, well-calibrated confidence scores provide valuable information to the user. We investigate several post hoc calibration methods that are straightforward to implement, some of which are novel. They are compared to Monte Carlo (MC) dropout. They are applied to neural networks trained with cross-entropy (CE) and soft Dice (SD) losses on BraTS 2018 and ISLES 2018. Surprisingly, models trained on SD loss are not necessarily less calibrated than those trained on CE loss. In all cases, at least one post hoc method improves the calibration. There is limited consistency across the results, so we cannot conclude that any one method is superior. In all cases, post hoc calibration is competitive with MC dropout. Although average calibration improves compared to the base model, subject-level variance of the calibration remains similar.
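The abstract does not list the specific post hoc methods studied, but the canonical example of a simple post hoc calibration method is temperature scaling: a single scalar T is fit on held-out data to minimize the negative log-likelihood of the temperature-scaled softmax, leaving the model's predictions (argmax) unchanged. A minimal NumPy sketch, assuming per-sample (or per-voxel) logits and integer labels, and using a grid search rather than a gradient-based optimizer:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(T, logits, labels):
    # mean negative log-likelihood of temperature-scaled probabilities
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    # pick the temperature that minimizes validation NLL;
    # T > 1 softens an overconfident model, T < 1 sharpens an underconfident one
    return min(grid, key=lambda T: nll(T, logits, labels))
```

Because dividing all logits by the same positive T preserves their ordering, segmentation accuracy is untouched; only the confidence scores change. Function names and the grid range here are illustrative choices, not taken from the paper.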

Citations (27)
