The Explanation Necessity for Healthcare AI (2406.00216v1)

Published 31 May 2024 in cs.AI

Abstract: Explainability is often critical to the acceptable implementation of AI. Nowhere is this more important than in healthcare, where decision-making directly impacts patients and trust in AI systems is essential. This trust is often built on the explanations and interpretations the AI provides. Despite significant advancements in AI interpretability, there remains a need for clear guidelines on when and to what extent explanations are necessary in the medical context. We propose a novel categorization system with four distinct classes of explanation necessity, guiding the level of explanation required: patient or sample (local) level, cohort or dataset (global) level, or both levels. We introduce a mathematical formulation that distinguishes these categories and offers a practical framework for researchers to determine the necessity and depth of explanations required in medical AI applications. Three key factors are considered: the robustness of the evaluation protocol, the variability of expert observations, and the representation dimensionality of the application. In this perspective, we address the question: When does an AI medical application need to be explained, and at what level of detail?
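
To make the abstract's framework concrete, here is a minimal Python sketch of a decision rule mapping the three stated factors to an explanation-necessity level. The paper's actual mathematical formulation is not reproduced on this page, so everything below is an assumption: the class names, thresholds, and combination logic are invented for illustration. Only the three input factors (evaluation-protocol robustness, expert-observation variability, representation dimensionality) and the local/global/both output levels come from the abstract.

```python
from dataclasses import dataclass
from enum import Enum


class ExplanationNecessity(Enum):
    """Placeholder labels; the paper's four categories may differ."""
    NONE = "no explanation required"
    LOCAL = "patient/sample (local) level"
    GLOBAL = "cohort/dataset (global) level"
    BOTH = "both local and global levels"


@dataclass
class Assessment:
    protocol_robustness: float  # in [0, 1]; higher = more robust evaluation protocol
    expert_variability: float   # in [0, 1]; higher = more disagreement between experts
    representation_dim: int     # dimensionality of the application's representation


def required_explanation(a: Assessment,
                         robustness_thresh: float = 0.8,
                         variability_thresh: float = 0.3,
                         dim_thresh: int = 100) -> ExplanationNecessity:
    """Toy decision rule; all thresholds are hypothetical, not from the paper."""
    if a.protocol_robustness >= robustness_thresh and a.expert_variability < variability_thresh:
        # Well-validated task with expert consensus: minimal explanation burden,
        # unless the representation is high-dimensional.
        return (ExplanationNecessity.NONE if a.representation_dim < dim_thresh
                else ExplanationNecessity.LOCAL)
    if a.expert_variability >= variability_thresh and a.protocol_robustness < robustness_thresh:
        # Weak validation combined with expert disagreement: explain at both levels.
        return ExplanationNecessity.BOTH
    # Intermediate cases: high-dimensional representations lean on cohort-level
    # (global) evidence; otherwise per-patient (local) explanations suffice.
    return (ExplanationNecessity.GLOBAL if a.representation_dim >= dim_thresh
            else ExplanationNecessity.LOCAL)


print(required_explanation(Assessment(0.6, 0.5, 512)))  # -> ExplanationNecessity.BOTH
```

The sketch only illustrates the shape of such a rule: each factor raises or lowers the required depth of explanation, and the output selects among the local, global, and combined levels named in the abstract.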

Authors (5)
  1. Michail Mamalakis (10 papers)
  2. Graham Murray (12 papers)
  3. John Suckling (9 papers)
  4. Héloïse de Vareilles (2 papers)
  5. Pietro Lio (69 papers)
Citations (1)