
What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use (1905.05134v2)

Published 13 May 2019 in cs.LG and stat.ML

Abstract: Translating ML models effectively to clinical practice requires establishing clinicians' trust. Explainability, or the ability of an ML model to justify its outcomes and assist clinicians in rationalizing the model prediction, has been generally understood to be critical to establishing trust. However, the field suffers from the lack of concrete definitions for usable explanations in different settings. To identify specific aspects of explainability that may catalyze building trust in ML models, we surveyed clinicians from two distinct acute care specialties (Intensive Care Unit and Emergency Department). We use their feedback to characterize when explainability helps to improve clinicians' trust in ML models. We further identify the classes of explanations that clinicians identified as most relevant and crucial for effective translation to clinical practice. Finally, we discern concrete metrics for rigorous evaluation of clinical explainability methods. By integrating perceptions of explainability between clinicians and ML researchers we hope to facilitate the endorsement and broader adoption and sustained use of ML systems in healthcare.

Authors (4)
  1. Sana Tonekaboni (11 papers)
  2. Shalmali Joshi (24 papers)
  3. Melissa D McCradden (2 papers)
  4. Anna Goldenberg (41 papers)
Citations (349)

Summary

Contextualizing Explainable Machine Learning for Clinical Utilization

The paper "What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use" by Tonekaboni et al. addresses a crucial barrier in the translational application of ML within healthcare—establishing trust through explainability. While ML models hold great promise for clinical support, their integration into clinical settings, particularly in high-stakes environments like Intensive Care Units (ICUs) and Emergency Departments (EDs), remains sporadic. The authors embark on a methodical investigation to delineate clinician-specific needs that elucidate the often abstract notion of ML explainability in clinical practice.

Surveying Clinicians for Explainability

The paper reports on an exploratory study involving clinicians from two acute care specialties, the ICU and the ED. The aim was to discern how explainability could enhance trust in and adoption of ML systems in these settings. The surveyed clinicians were asked to engage with hypothetical scenarios involving ML tools that predict cardiac arrest risk and patient acuity. These interactions led to identifying the types of explanations that clinicians find most useful: feature importance, instance-level explanations, uncertainty, and temporal explanations.

Essential Explainability Components

Clinicians articulated that merely knowing a model's overall accuracy is insufficient for trust; understanding the reasoning behind individual predictions is paramount. They prioritized explanations that delineate feature importance, allowing them to compare model output against clinical intuition. Clear representation of uncertainty emerged as a necessity, aiding clinicians in making informed decisions in critical, time-constrained settings. Furthermore, transparent designs and explanations that account for the temporal evolution of patient data were advocated as vital for integrating ML systems into routine clinical workflows.
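
To make these requests concrete, the sketch below (not drawn from the paper) shows one way feature importance and per-patient uncertainty might be surfaced alongside a risk prediction. It assumes a generic scikit-learn classifier and synthetic feature names standing in for real clinical variables.

```python
# Illustrative sketch, not the paper's method: surface global feature importance
# and per-patient uncertainty for a hypothetical acute-care risk model.
# Feature names and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["heart_rate", "lactate", "systolic_bp", "resp_rate", "age"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global feature importance: lets a clinician compare the model's main drivers
# against clinical intuition.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:12s} importance={score:.3f}")

# Per-patient uncertainty: the spread of votes across trees gives a rough
# confidence band around the predicted risk for one patient.
patient = X[:1]
tree_probs = np.array([t.predict_proba(patient)[0, 1] for t in model.estimators_])
print(f"predicted risk = {tree_probs.mean():.2f} +/- {tree_probs.std():.2f}")
```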

Metrics and Future Considerations

The paper also proposes specific metrics for evaluating whether explainability methods are clinically useful, including domain-appropriate representation, consistency, and the potential for actionable insights. Through these metrics, the authors suggest a roadmap for future research to address current gaps in ML explainability tailored to the clinical milieu.
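
As an illustration of how one such metric could be operationalized, the following sketch (an assumption-laden example, not the paper's evaluation protocol) scores the consistency of permutation-based attributions by their rank agreement across bootstrap resamples; the model, data, and resampling scheme are chosen purely for demonstration.

```python
# Illustrative sketch, not the paper's metric: quantify explanation "consistency"
# as the rank agreement of feature attributions across bootstrap resamples.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def attributions(X_eval, y_eval):
    # Mean permutation importance per feature on a given evaluation set.
    return permutation_importance(model, X_eval, y_eval,
                                  n_repeats=5, random_state=0).importances_mean

base = attributions(X, y)
scores = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
    scores.append(spearmanr(base, attributions(X[idx], y[idx])).correlation)

# High mean rank correlation means the method ranks features consistently.
print(f"explanation consistency (mean Spearman rho): {np.mean(scores):.2f}")
```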

Observations and Implications

The authors critically appraise existing ML methods for their applicability to clinical settings, identifying potential mismatches—such as complex data dependence and variations in underlying model designs—that can impede straightforward explainability. The intersection of technical precision and clinician-centric utility is positioned as a fertile ground for advancing ML implementation in healthcare, with potential implications for increased adoption and trust in decision-support systems.

Conclusion

This paper is foundational in its approach to bridging the gap between clinical practice and ML research by recommending a clinician-informed framework for explainability. It reinforces the imperative that ML systems in healthcare should not merely aim for predictive accuracy but must also invest in tailored, context-specific explanation methods. Further exploration of such clinician-centered explanatory frameworks could substantiate the clinical utility of ML tools, fostering meaningful advances in patient care.