Is Grad-CAM Explainable in Medical Images? (2307.10506v1)

Published 20 Jul 2023 in eess.IV, cs.CV, and cs.CY

Abstract: Explainable Deep Learning has gained significant attention in the field of AI, particularly in domains such as medical imaging, where accurate and interpretable machine learning models are crucial for effective diagnosis and treatment planning. Grad-CAM is a baseline technique that highlights the regions of an image most critical to a deep learning model's decision, increasing interpretability and trust in the results. It is applied in many computer vision (CV) tasks such as classification and explanation. This study explores the principles of Explainable Deep Learning and its relevance to medical imaging, discusses various explainability techniques and their limitations, and examines medical imaging applications of Grad-CAM. The findings highlight the potential of Explainable Deep Learning and Grad-CAM in improving the accuracy and interpretability of deep learning models in medical imaging. The code is available in (will be available).
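
To make the idea concrete, the sketch below shows how Grad-CAM is commonly computed: the feature maps of a late convolutional layer are weighted by the spatial average of the gradients of the target class score, summed, and passed through a ReLU to keep only positive evidence. This is a minimal illustration in PyTorch and is not the authors' released code; the choice of ResNet-18 and its `layer4` as the target layer is an assumption for demonstration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch (illustrative, not the paper's implementation).
# Assumption: a torchvision ResNet-18 classifier; any CNN with a known
# last convolutional layer works the same way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps of the target layer on the forward pass.
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Cache the gradients flowing back into the target layer.
    gradients["value"] = grad_out[0].detach()

target_layer = model.layer4[-1]  # last conv block (assumed target layer)
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """x: preprocessed image tensor of shape (1, 3, H, W)."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    A = activations["value"]                                # (1, C, h, w) feature maps
    w = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1) channel weights
    cam = F.relu((w * A).sum(dim=1, keepdim=True))          # weighted sum + ReLU
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam[0, 0], class_idx
```

The returned heatmap can then be overlaid on the input image to visualize which regions contributed most to the predicted (or a chosen) class.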

Authors (4)
  1. Subhashis Suara (1 paper)
  2. Aayush Jha (2 papers)
  3. Pratik Sinha (1 paper)
  4. Arif Ahmed Sekh (5 papers)
Citations (8)