
MRxaI: Black-Box Explainability for Image Classifiers in a Medical Setting (2311.14471v1)

Published 24 Nov 2023 in cs.CV and cs.AI

Abstract: Existing tools for explaining the output of image classifiers can be divided into white-box tools, which rely on access to the model internals, and black-box tools, which are agnostic to the model. As the use of AI in the medical domain grows, so too does the use of explainability tools. Existing work on medical image explanations focuses on white-box tools, such as Grad-CAM. However, there are clear advantages to switching to a black-box tool, including the ability to use it with any classifier and the wide selection of black-box tools available. On standard images, black-box tools are as precise as white-box ones. In this paper we compare the performance of several black-box methods against Grad-CAM on a brain cancer MRI dataset. We demonstrate that most black-box tools are not suitable for explaining medical image classifications and present a detailed analysis of the reasons for their shortcomings. We also show that one black-box tool, ReX, which is based on causal explainability, performs as well as Grad-CAM.
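To illustrate the white-box/black-box distinction the abstract draws, the sketch below shows a generic black-box saliency approach (occlusion-based perturbation): the classifier is treated as an opaque function from image to class probabilities, with no access to gradients or internal activations, in contrast to Grad-CAM, which needs the model's internals. This is not the paper's method (ReX works differently, via causal responsibility); it is a minimal, self-contained example, and the names `occlusion_saliency`, `predict_fn`, and `toy_predict` are hypothetical.

```python
import numpy as np


def occlusion_saliency(image, predict_fn, target_class, patch=16, stride=8, baseline=0.0):
    """Black-box saliency sketch: slide an occluding patch over the image and
    record how much the classifier's confidence in `target_class` drops.

    `predict_fn` is treated as an opaque function (image -> class probabilities),
    so no gradients or internal activations are required, unlike Grad-CAM."""
    h, w = image.shape[:2]
    base_score = predict_fn(image)[target_class]
    heat = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            drop = base_score - predict_fn(occluded)[target_class]
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    # Average the recorded confidence drops per pixel to form the saliency map.
    return heat / np.maximum(counts, 1.0)


if __name__ == "__main__":
    # Toy stand-in for an MRI classifier: its score depends only on a bright
    # square "lesion", so the saliency map should peak over that region.
    rng = np.random.default_rng(0)
    img = rng.normal(0.1, 0.05, size=(64, 64)).astype(np.float32)
    img[20:36, 24:40] += 1.0  # synthetic "lesion"

    def toy_predict(x):
        lesion_score = x[20:36, 24:40].mean()
        p_tumor = 1.0 / (1.0 + np.exp(-10.0 * (lesion_score - 0.5)))
        return np.array([1.0 - p_tumor, p_tumor])

    sal = occlusion_saliency(img, toy_predict, target_class=1)
    ys, xs = np.unravel_index(np.argmax(sal), sal.shape)
    print(f"most influential region peaks near ({ys}, {xs})")
```

The same `predict_fn` interface is all that perturbation-based black-box tools such as RISE or LIME assume, which is why they can be applied to any classifier; the trade-off the paper investigates is whether such perturbation schemes remain reliable on medical images like brain MRI.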

