Interpretability of Uncertainty: Exploring Cortical Lesion Segmentation in Multiple Sclerosis (2407.05761v1)

Published 8 Jul 2024 in eess.IV and cs.CV

Abstract: Uncertainty quantification (UQ) has become critical for evaluating the reliability of artificial intelligence systems, especially in medical image segmentation. This study addresses the interpretability of instance-wise uncertainty values in deep learning models for focal lesion segmentation in magnetic resonance imaging, specifically cortical lesion (CL) segmentation in multiple sclerosis. CL segmentation presents several challenges, including the complexity of manual segmentation, high variability in annotation, data scarcity, and class imbalance, all of which contribute to aleatoric and epistemic uncertainty. We explore how UQ can be used not only to assess prediction reliability but also to provide insights into model behavior, detect biases, and verify the accuracy of UQ methods. Our research demonstrates the potential of instance-wise uncertainty values to offer post hoc global model explanations, serving as a sanity check for the model. The implementation is available at https://github.com/NataliiaMolch/interpret-lesion-unc.

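To make the notion of instance-wise (lesion-wise) uncertainty concrete, below is a minimal sketch of one common way to obtain such values: aggregating voxel-wise predictive entropy from an ensemble of segmentation outputs over each connected component of the predicted mask. The function names, the binary-entropy measure, and the mean aggregation are illustrative assumptions, not the paper's exact method; the authors' implementation is in the linked repository.

```python
# Hypothetical sketch: lesion-wise uncertainty from an ensemble of
# probabilistic segmentations. Aggregation choices are assumptions.
import numpy as np
from scipy import ndimage


def voxel_entropy(prob_maps: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Binary entropy of the mean foreground probability across ensemble members.

    prob_maps: shape (n_members, *volume_shape), values in [0, 1].
    """
    p = prob_maps.mean(axis=0)
    return -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))


def lesion_uncertainty(prob_maps: np.ndarray, threshold: float = 0.5) -> dict:
    """Aggregate voxel-wise entropy into one value per predicted lesion.

    Lesions are taken as connected components of the thresholded mean
    probability map; each lesion's uncertainty is its mean voxel entropy.
    """
    entropy = voxel_entropy(prob_maps)
    binary_pred = prob_maps.mean(axis=0) > threshold
    labels, n_lesions = ndimage.label(binary_pred)  # 3D connected components
    return {
        lesion_id: float(entropy[labels == lesion_id].mean())
        for lesion_id in range(1, n_lesions + 1)
    }


# Example on a small synthetic volume with 5 ensemble members
rng = np.random.default_rng(0)
probs = rng.random((5, 32, 32, 32))
print(lesion_uncertainty(probs))
```

Lesion-level values of this kind are what the paper then inspects post hoc, e.g. to check whether high uncertainty tracks annotation disagreement or systematic model biases.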
Authors (11)
  1. Nataliia Molchanova (9 papers)
  2. Alessandro Cagol (3 papers)
  3. Pedro M. Gordaliza (6 papers)
  4. Mario Ocampo-Pineda (4 papers)
  5. Po-Jui Lu (4 papers)
  6. Matthias Weigel (6 papers)
  7. Xinjie Chen (4 papers)
  8. Adrien Depeursinge (19 papers)
  9. Cristina Granziera (18 papers)
  10. Henning Müller (45 papers)
  11. Meritxell Bach Cuadra (45 papers)
