Rad4XCNN: a new agnostic method for post-hoc global explanation of CNN-derived features by means of radiomics (2405.02334v1)

Published 26 Apr 2024 in cs.CV, cs.AI, and cs.LG

Abstract: In recent years, AI in clinical decision support systems (CDSS) has played a key role in harnessing machine learning and deep learning architectures. Despite their promising capabilities, the lack of transparency and explainability of AI models poses significant challenges, particularly in medical contexts where reliability is mandatory. Achieving transparency without compromising predictive accuracy remains a key challenge. This paper presents a novel method, namely Rad4XCNN, to enhance the predictive power of CNN-derived features with the interpretability inherent in radiomic features. Rad4XCNN diverges from conventional saliency-map-based methods by associating intelligible meaning to CNN-derived features by means of radiomics, offering new perspectives on explanation methods beyond visualization maps. Using a breast cancer classification task as a case study, we evaluated Rad4XCNN on ultrasound imaging datasets, including an online dataset and two in-house datasets for internal and external validation. Some key results are: i) CNN-derived features guarantee more robust accuracy when compared against ViT-derived and radiomic features; ii) conventional visualization map methods for explanation present several pitfalls; iii) Rad4XCNN does not sacrifice model accuracy for explainability; iv) Rad4XCNN provides global explanation insights enabling the physician to analyze the model outputs and findings. In addition, we highlight the importance of integrating interpretability into AI models for enhanced trust and adoption in clinical practice, emphasizing how our method can mitigate some concerns related to explainable AI methods.
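
The abstract says Rad4XCNN attaches intelligible meaning to CNN-derived features by means of radiomics, but does not spell out the association mechanism. Below is a minimal sketch of one plausible realization, assuming a correlation-based mapping: each CNN embedding dimension is matched to the radiomic features (e.g., extracted with PyRadiomics) it tracks most closely across the dataset. The function name, the Spearman correlation, and the top-k selection are illustrative assumptions, not the authors' actual procedure.

```python
# Hedged sketch of a Rad4XCNN-style association step: map each CNN-derived
# feature to the radiomic features it correlates with most strongly.
# ASSUMPTION: the paper's exact mechanism is not given in the abstract;
# Spearman rank correlation and top-k selection are illustrative choices.
import numpy as np
from scipy.stats import spearmanr


def associate_cnn_with_radiomics(cnn_feats, rad_feats, rad_names, top_k=3):
    """Rank radiomic features by |Spearman rho| against each CNN feature.

    cnn_feats: (n_samples, n_cnn) array of CNN-derived features
    rad_feats: (n_samples, n_rad) array of radiomic features
    rad_names: list of n_rad radiomic feature names (e.g., from PyRadiomics)
    Returns a dict: CNN feature index -> [(radiomic name, |rho|), ...]
    """
    associations = {}
    for i in range(cnn_feats.shape[1]):
        scores = []
        for j in range(rad_feats.shape[1]):
            rho, _ = spearmanr(cnn_feats[:, i], rad_feats[:, j])
            scores.append(abs(rho))
        top = np.argsort(scores)[::-1][:top_k]
        associations[i] = [(rad_names[j], scores[j]) for j in top]
    return associations


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cnn = rng.normal(size=(100, 8))   # stand-in CNN embeddings
    rad = rng.normal(size=(100, 5))   # stand-in radiomic features
    names = [f"radiomic_{k}" for k in range(5)]
    print(associate_cnn_with_radiomics(cnn, rad, names))
```

Under this reading, a global explanation would report, for each predictive CNN feature, the interpretable radiomic descriptors (shape, intensity, texture) it is most associated with, rather than a per-image saliency map.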
