
Explainable artificial intelligence model to predict acute critical illness from electronic health records (1912.01266v1)

Published 3 Dec 2019 in cs.AI, cs.LG, stat.AP, and stat.ML

Abstract: We developed an explainable AI early warning score (xAI-EWS) system for early detection of acute critical illness. While maintaining a high predictive performance, our system explains to the clinician on which relevant electronic health records (EHRs) data the prediction is grounded. Acute critical illness is often preceded by deterioration of routinely measured clinical parameters, e.g., blood pressure and heart rate. Early clinical prediction is typically based on manually calculated screening metrics that simply weigh these parameters, such as Early Warning Scores (EWS). The predictive performance of EWSs yields a tradeoff between sensitivity and specificity that can lead to negative outcomes for the patient. Previous work on EHR-trained AI systems offers promising results with high levels of predictive performance in relation to the early, real-time prediction of acute critical illness. However, without insight into the complex decisions by such system, clinical translation is hindered. In this letter, we present our xAI-EWS system, which potentiates clinical translation by accompanying a prediction with information on the EHR data explaining it.


Summary

Explainable AI Model for Predicting Acute Critical Illness from Electronic Health Records

The paper introduces an explainable artificial intelligence (xAI) model designed for the early prediction of acute critical illnesses using electronic health records (EHRs). This model aims to enhance clinical practice by providing insights into the predictive factors it identifies, thereby increasing its acceptability and potential adoption by healthcare professionals.

Background and Motivation

Existing Early Warning Score systems, such as MEWS and SOFA, have limitations in predictive performance due to their reliance on simple weighted metrics and a consequent tradeoff between sensitivity and specificity. Recent advances in AI have demonstrated improved predictive capabilities for real-time detection of acute conditions by leveraging vast amounts of EHR data. However, the opaque nature of typical AI models hinders their clinical translation. An explainable AI model seeks to overcome this barrier by elucidating its decision-making process in a manner that clinicians can comprehend and trust.

Methodology

The xAI model employs a Temporal Convolutional Network (TCN) architecture to predict acute conditions such as sepsis, acute kidney injury (AKI), and acute lung injury (ALI). This neural network structure is well suited to the sequential data inherent in patient health records. The predictive component is complemented by a Deep Taylor Decomposition (DTD) explanation module, which systematically decomposes each prediction and assigns relevance scores to the input features that contributed to it.
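The paper itself presents no code; purely as an illustration, the sketch below shows the kind of causal, dilated convolution block a TCN is typically built from, written in PyTorch. The channel width, kernel size, number of levels, and the sigmoid risk head are assumptions made for the sketch, not details taken from the paper; only the 34-parameter input dimension comes from the text.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution padded on the left only, so the output at time t
    depends only on inputs at times <= t (no future leakage)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # left-pad only
        return self.conv(x)

class TCN(nn.Module):
    """Stack of dilated causal convolutions; the dilation doubles per level,
    so the receptive field grows exponentially with depth."""
    def __init__(self, n_features=34, hidden=64, levels=4):
        super().__init__()
        layers, ch = [], n_features
        for i in range(levels):
            layers += [CausalConv1d(ch, hidden, kernel_size=3, dilation=2 ** i),
                       nn.ReLU()]
            ch = hidden
        self.body = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, 1)          # scalar risk score

    def forward(self, x):                         # x: (batch, 34, time)
        h = self.body(x)
        return torch.sigmoid(self.head(h[:, :, -1]))  # risk at latest step

# toy usage: 8 admissions, 34 clinical parameters, 48 hourly time steps
scores = TCN()(torch.randn(8, 34, 48))
print(scores.shape)  # torch.Size([8, 1])
```

The left-only padding is what makes the convolution causal: a risk score issued at hour t never depends on measurements recorded after hour t.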

The model was trained on a comprehensive dataset comprising EHRs from four Danish municipalities over a five-year period (2012-2017), encompassing 66,288 unique patients with 163,050 admissions. The population studied had a prevalence of 2.44% for sepsis, 0.75% for AKI, and 1.68% for ALI. The input features were confined to 34 clinical parameters, a strategic decision aimed at improving the interpretability of model explanations.

Results

The xAI-EWS model demonstrates strong predictive performance, with AUROC values of 0.92, 0.88, and 0.90 for sepsis, AKI, and ALI, respectively. These results indicate that the model discriminates between patients who go on to develop the condition and those who do not more reliably than the traditional scoring systems discussed above. The model issues predictions at multiple time points before illness onset, leaving a window for timely intervention.
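To make the reported discrimination figures concrete, the sketch below shows how an AUROC and a sensitivity/specificity pair at a chosen alarm threshold are read off a set of risk scores with scikit-learn. The labels and scores are synthetic (only the 2.44% sepsis prevalence is taken from the paper); this is not the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# synthetic stand-in: 2.44% positive prevalence, as reported for sepsis
y_true = rng.random(10_000) < 0.0244
y_score = np.clip(rng.normal(0.3 + 0.4 * y_true, 0.15), 0, 1)

print("AUROC:", roc_auc_score(y_true, y_score))

# pick an alarm threshold and read off the sensitivity/specificity tradeoff
fpr, tpr, thresholds = roc_curve(y_true, y_score)
i = np.argmin(np.abs(thresholds - 0.5))
print(f"threshold=0.5  sensitivity={tpr[i]:.2f}  specificity={1 - fpr[i]:.2f}")
```

Moving the threshold along the ROC curve is exactly the sensitivity/specificity tradeoff the paper attributes to conventional EWSs.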

Critically, the explanation module provides both individual-level and population-level insights. At the individual patient level, the model identifies key clinical features contributing to a particular prediction, thus delivering tailored interpretability. For example, for a patient at high risk of sepsis, high respiratory rate and low plasma albumin emerged as significant indicators. At the population level, the model highlights global parameter importance across the cohort, fostering a broader understanding of common predictive factors.
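Deep Taylor Decomposition itself is beyond a short sketch, but the two kinds of insight described above reduce to simple aggregations once a per-input relevance tensor exists. The sketch below assumes such a tensor (patients × features × time steps) has already been produced by an upstream DTD pass; the random values and placeholder feature names are hypothetical.

```python
import numpy as np

# assume an upstream DTD pass produced per-input relevance scores
# shape: (n_patients, n_features, n_timesteps)
relevance = np.random.default_rng(1).normal(size=(100, 34, 48))
features = [f"param_{i}" for i in range(34)]  # hypothetical names

# individual-level explanation: sum relevance over time for one patient
# and report the features that drove this particular prediction
patient = relevance[0].sum(axis=1)
for i in np.argsort(-np.abs(patient))[:3]:
    print(f"patient 0: {features[i]} relevance={patient[i]:+.2f}")

# population-level explanation: mean absolute relevance across the cohort
global_importance = np.abs(relevance).sum(axis=2).mean(axis=0)
for i in np.argsort(-global_importance)[:3]:
    print(f"cohort: {features[i]} importance={global_importance[i]:.2f}")
```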

Implications and Future Directions

By elucidating the reasoning behind AI-driven predictions, the xAI-EWS model aligns with regulatory requirements for transparency and explainability in medical technologies, such as those outlined in the European Union's GDPR and in FDA guidance. This framework charts a pathway for the broader integration of AI systems in clinical settings, enhancing clinical decision-making through actionable insights into patient risk factors.

Future research directions include validating the model across different populations to ensure its generalizability and adapting it to predict other critical outcomes, such as electrolyte imbalances and cardiac arrest. Furthermore, refining the ground-truth definitions used during training for conditions such as AKI and ALI remains a priority for improving precision.

In summary, the xAI-EWS presents an innovative approach to applying AI in healthcare by balancing high predictive performance with critical explainability, thereby addressing a pertinent challenge in AI deployment in clinical environments.