The Explanation Necessity for Healthcare AI (2406.00216v2)
Abstract: Explainability is a critical factor in enhancing the trustworthiness and acceptance of AI in healthcare, where decisions directly impact patient outcomes. Despite advancements in AI interpretability, clear guidelines on when and to what extent explanations are required in medical applications remain lacking. We propose a novel categorization system comprising four classes of explanation necessity (self-explainable, semi-explainable, non-explainable, and new-patterns discovery) that guides the required level of explanation: local (patient or sample level), global (cohort or dataset level), or both. To support this system, we introduce a mathematical formulation that incorporates three key factors: (i) robustness of the evaluation protocol, (ii) variability of expert observations, and (iii) representation dimensionality of the application. The framework provides researchers with a practical tool for determining the appropriate depth of explainability, addressing the critical question: when does an AI medical application need to be explained, and at what level of detail?
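As a rough illustration of how the three factors might feed into a necessity class, the sketch below scores a hypothetical application on protocol robustness, expert-observation variability, and representation dimensionality, and maps an aggregate score to one of the four classes. The 0-1 scoring scale, the thresholds, the ordering of classes along the score, and the names `Factors` and `explanation_necessity` are all assumptions for illustration, not the paper's actual formulation.

```python
from dataclasses import dataclass
from enum import Enum


class Necessity(Enum):
    """The four explanation-necessity classes named in the abstract."""
    SELF_EXPLAINABLE = "self-explainable"
    SEMI_EXPLAINABLE = "semi-explainable"
    NON_EXPLAINABLE = "non-explainable"
    NEW_PATTERNS_DISCOVERY = "new-patterns discovery"


@dataclass
class Factors:
    """Hypothetical 0-1 scores for the three factors in the abstract."""
    protocol_robustness: float            # (i) robustness of the evaluation protocol
    expert_variability: float             # (ii) variability of expert observations
    representation_dimensionality: float  # (iii) representation dimensionality


def explanation_necessity(f: Factors) -> Necessity:
    """Map factor scores to a necessity class with illustrative thresholds.

    The aggregation, thresholds, and class ordering below are assumptions;
    they only show the general shape of such a mapping.
    """
    # Assumed here: weaker evaluation protocols, higher expert disagreement,
    # and higher-dimensional representations all raise the need for explanation.
    need = (1.0 - f.protocol_robustness) + f.expert_variability + f.representation_dimensionality
    if f.expert_variability > 0.8 and f.representation_dimensionality > 0.8:
        return Necessity.NEW_PATTERNS_DISCOVERY
    if need < 1.0:
        return Necessity.SELF_EXPLAINABLE
    if need < 2.0:
        return Necessity.SEMI_EXPLAINABLE
    return Necessity.NON_EXPLAINABLE


if __name__ == "__main__":
    example = Factors(protocol_robustness=0.9,
                      expert_variability=0.3,
                      representation_dimensionality=0.4)
    print(explanation_necessity(example))  # Necessity.SELF_EXPLAINABLE
```

In this toy mapping, a well-validated, low-disagreement, low-dimensional application lands in the self-explainable class, while rising variability or dimensionality pushes it toward classes demanding deeper (local, global, or both) explanations; the paper's own formulation should be consulted for the actual decision rule.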