
The Explanation Necessity for Healthcare AI (2406.00216v2)

Published 31 May 2024 in cs.AI

Abstract: Explainability is a critical factor in enhancing the trustworthiness and acceptance of AI in healthcare, where decisions directly impact patient outcomes. Despite advancements in AI interpretability, clear guidelines on when and to what extent explanations are required in medical applications remain lacking. We propose a novel categorization system comprising four classes of explanation necessity (self-explainable, semi-explainable, non-explainable, and new-patterns discovery), guiding the required level of explanation, whether local (patient or sample level), global (cohort or dataset level), or both. To support this system, we introduce a mathematical formulation that incorporates three key factors: (i) robustness of the evaluation protocol, (ii) variability of expert observations, and (iii) representation dimensionality of the application. This framework provides a practical tool for researchers to determine the appropriate depth of explainability needed, addressing the critical question: When does an AI medical application need to be explained, and at what level of detail?
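
The abstract names the four necessity classes and the three input factors but not the decision rule that maps one to the other. The minimal Python sketch below illustrates how such a categorization could be wired up; the `ApplicationProfile` fields, the normalization of the factors to [0, 1], the thresholds, and the mapping logic are all illustrative assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass
from enum import Enum


class Necessity(Enum):
    SELF_EXPLAINABLE = "self-explainable"
    SEMI_EXPLAINABLE = "semi-explainable"
    NON_EXPLAINABLE = "non-explainable"
    NEW_PATTERNS_DISCOVERY = "new-patterns discovery"


@dataclass
class ApplicationProfile:
    # Hypothetical scores in [0, 1]; the paper defines these three factors,
    # but this scaling and encoding are assumptions for illustration.
    protocol_robustness: float   # (i) robustness of the evaluation protocol
    expert_variability: float    # (ii) variability of expert observations
    representation_dim: float    # (iii) representation dimensionality of the application


def explanation_necessity(p: ApplicationProfile) -> Necessity:
    """Toy decision rule assigning one of the four necessity classes.

    The thresholds and the order of the checks are assumptions made for
    this sketch, not the mathematical formulation from the paper.
    """
    if p.expert_variability > 0.8 and p.protocol_robustness < 0.3:
        # High disagreement among experts and a weak evaluation protocol.
        return Necessity.NEW_PATTERNS_DISCOVERY
    if p.protocol_robustness > 0.8 and p.expert_variability < 0.2:
        # Robust protocol with consistent expert ground truth.
        return Necessity.NON_EXPLAINABLE
    if p.representation_dim < 0.3:
        # Low-dimensional, directly inspectable representation.
        return Necessity.SELF_EXPLAINABLE
    return Necessity.SEMI_EXPLAINABLE


# Example usage with made-up scores:
profile = ApplicationProfile(protocol_robustness=0.6,
                             expert_variability=0.5,
                             representation_dim=0.7)
print(explanation_necessity(profile))  # Necessity.SEMI_EXPLAINABLE
```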

Citations (1)
