A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support (2505.10188v1)

Published 15 May 2025 in cs.AI

Abstract: As the field of healthcare increasingly adopts artificial intelligence, it becomes important to understand which types of explanations increase transparency and empower users to develop confidence and trust in the predictions made by ML systems. In shared decision-making scenarios where doctors cooperate with ML systems to reach an appropriate decision, establishing mutual trust is crucial. In this paper, we explore different approaches to generating explanations in eXplainable AI (XAI) and make their underlying arguments explicit so that they can be evaluated by medical experts. In particular, we present the findings of a user study conducted with physicians to investigate their perceptions of various types of AI-generated explanations in the context of diagnostic decision support. The study aims to identify the most effective and useful explanations that enhance the diagnostic process. In the study, medical doctors filled out a survey to assess different types of explanations. Further, an interview was carried out post-survey to gain qualitative insights on the requirements of explanations incorporated in diagnostic decision support. Overall, the insights gained from this study contribute to understanding the types of explanations that are most effective.

Authors (6)
  1. Felix Liedeker (1 paper)
  2. Olivia Sanchez-Graillet (1 paper)
  3. Moana Seidler (1 paper)
  4. Christian Brandt (1 paper)
  5. Jörg Wellmer (2 papers)
  6. Philipp Cimiano (25 papers)

Summary

Evaluating Argumentative Explanations in Diagnostic Decision Support

The paper "A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support" addresses a significant concern in healthcare: the integration of explainable artificial intelligence (XAI) in medical decision-making processes. It specifically investigates how different types of AI-generated explanations can be perceived in diagnostic support, particularly focusing on transient loss of consciousness (TLOC). By conducting a user paper with medical professionals, the authors seek to delineate the usefulness, comprehensibility, plausibility, and applicability of various argumentative explanation formats generated by AI.

Approach and Methodology

The authors use Bayesian Networks, trained on data compiled from patient reports and historical questionnaire studies, to predict medical diagnoses. From these models, explanations are generated with established XAI methods such as LIME and counterfactual analysis, and argument templates then transform the outputs into natural-language arguments that medical professionals can assess.
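As a rough illustration of the attribution step, the sketch below computes local feature attributions with LIME over a stand-in scikit-learn classifier on synthetic data. This is not the authors' pipeline: the paper uses a Bayesian Network trained on patient data, and the feature and class names here are hypothetical.

```python
# Minimal sketch of LIME-based attribution over a stand-in classifier
# (synthetic data; feature/class names are illustrative, not from the paper).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["loss_duration_s", "prodromal_symptoms", "tongue_bite", "age"]
class_names = ["syncope", "epileptic seizure"]

rng = np.random.default_rng(0)
X_train = rng.random((200, len(feature_names)))   # synthetic training data
y_train = rng.integers(0, 2, size=200)            # synthetic labels

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=class_names,
    mode="classification",
)

# Local attribution scores for one (synthetic) patient instance.
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```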

Three distinct types of explanations are generated, illustrated by the template sketch after this list:

  1. Attribution-based explanations that highlight key features justifying a prediction.
  2. Counterfactual explanations that illustrate how altering certain variables might have changed the diagnosis.
  3. Exclusion-principle explanations that argue against alternative diagnoses, guiding towards the correct decision.
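For concreteness, the following sketch shows how such explanation outputs could be verbalized through simple argument templates. The phrasing and function names are our own illustrative assumptions, not the templates used in the paper.

```python
# Illustrative argument templates (hypothetical phrasing, not the paper's templates)
# that verbalize the three explanation types as natural-language arguments.

def attribution_argument(diagnosis, top_features):
    # top_features: list of (feature_name, attribution_weight) pairs, e.g. from LIME.
    evidence = ", ".join(f"{name} ({weight:+.2f})" for name, weight in top_features)
    return f"The diagnosis '{diagnosis}' is supported mainly by: {evidence}."

def counterfactual_argument(diagnosis, alternative, feature, hypothetical_value):
    return (f"Had {feature} been {hypothetical_value}, the prediction would have "
            f"changed from '{diagnosis}' to '{alternative}'.")

def exclusion_argument(excluded, missing_finding, diagnosis):
    return (f"'{excluded}' can be ruled out because {missing_finding} is absent, "
            f"which supports '{diagnosis}' as the remaining diagnosis.")

# Example usage with made-up values:
print(attribution_argument("epileptic seizure",
                           [("tongue_bite", 0.41), ("loss_duration_s", 0.27)]))
print(counterfactual_argument("epileptic seizure", "syncope", "tongue_bite", "absent"))
print(exclusion_argument("syncope", "prodromal dizziness", "epileptic seizure"))
```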

User Study and Findings

In the empirical evaluation involving eight medical experts, the paper analyzes the perceived value of these AI-generated explanations along several dimensions: comprehensibility, plausibility, completeness, and whether the experts would use them to explain decisions to peers. Attribution-based explanations were rated highest for comprehensibility and plausibility, suggesting a preference for straightforward explanatory formats over the more complex counterfactual and exclusion-based ones.

Despite these favorable ratings, the experts were reluctant to use AI-generated explanations in discussions with peers. Post-study interviews revealed key areas for improvement: enhancing prediction reliability, adapting explanation specificity to case complexity, and balancing the level of presented detail to avoid cognitive overload.

Practical and Theoretical Implications

This paper contributes to the ongoing discourse in XAI by empirically evaluating argumentation-based explanations in a medical context. It reinforces the idea that while AI tools can support diagnostic decision-making, the format and content of explanations are crucial for earning professional trust. The paper suggests that a more tailored, interactive explanation model, aligned with the complexity of individual cases, may be necessary for wider adoption in clinical practice.

Future Directions

The results indicate potential pathways for further research, including developing a more interactive explanation system that adjusts for user expertise and case details. This could address existing reservations among medical professionals regarding AI systems. Additionally, examining the use of graphical aids or statistical backups alongside text explanations might offer a comprehensive explanation format that can meet diverse needs within the medical community.

Overall, this paper showcases the importance of aligning AI-generated explanations with human cognitive frameworks in medical decision support systems, pointing towards a future where AI can effectively bridge the gap between sophisticated machine learning predictions and practical clinical use.
