Interpretable Vital Sign Forecasting with Model Agnostic Attention Maps (2405.01714v3)

Published 2 May 2024 in cs.LG and cs.AI

Abstract: Sepsis is a leading cause of mortality in intensive care units (ICUs), representing a substantial medical challenge. The complexity of analyzing diverse vital signs to predict sepsis further aggravates this issue. While deep learning techniques have advanced early sepsis prediction, their 'black-box' nature obscures their internal logic, impairing interpretability in critical settings such as ICUs. This paper introduces a framework that combines a deep learning model with an attention mechanism that highlights the critical time steps in the forecasting process, thus improving model interpretability and supporting clinical decision-making. We show that the attention mechanism can be adapted to various black-box time series forecasting models such as N-HiTS and N-BEATS. Our method preserves the accuracy of conventional deep learning models while enhancing interpretability through heatmaps generated from the attention weights. We evaluate our model on the eICU-CRD dataset, focusing on forecasting vital signs for sepsis patients, and assess its performance using mean squared error (MSE) and dynamic time warping (DTW) metrics. We explore the attention maps of N-HiTS and N-BEATS, examining the differences in their performance and identifying crucial factors influencing vital sign forecasting.
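
To make the described approach concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a model-agnostic attention layer that weights the time steps of the lookback window before passing it to a black-box forecaster, with the softmax weights doubling as an interpretability heatmap, alongside the MSE and DTW metrics mentioned for evaluation. The class names, the stand-in MLP backbone (used here in place of N-HiTS or N-BEATS), and all hyperparameters are assumptions for illustration only.

```python
import numpy as np
import torch
import torch.nn as nn


class TimeStepAttention(nn.Module):
    """Scores each time step of the lookback window; the softmax weights
    double as the interpretability heatmap."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):                        # x: (batch, lookback)
        scores = self.score(x.unsqueeze(-1))     # (batch, lookback, 1)
        weights = torch.softmax(scores, dim=1)   # attention over time steps
        return x * weights.squeeze(-1), weights.squeeze(-1)


class AttentiveForecaster(nn.Module):
    """Attention layer in front of a black-box forecaster (a stand-in MLP here;
    swap in N-HiTS or N-BEATS in practice)."""
    def __init__(self, lookback: int, horizon: int):
        super().__init__()
        self.attn = TimeStepAttention()
        self.backbone = nn.Sequential(nn.Linear(lookback, 64), nn.ReLU(), nn.Linear(64, horizon))

    def forward(self, x):
        weighted, attn_map = self.attn(x)
        return self.backbone(weighted), attn_map  # forecast + heatmap weights


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Plain O(n*m) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])


# Toy usage: forecast 6 future readings from a 24-step vital-sign window.
model = AttentiveForecaster(lookback=24, horizon=6)
x = torch.randn(4, 24)                # 4 synthetic patients
y_hat, attn_map = model(x)            # attn_map rows sum to 1 over time steps
y_true = torch.randn(4, 6)
mse = nn.functional.mse_loss(y_hat, y_true).item()
dtw = dtw_distance(y_hat[0].detach().numpy(), y_true[0].numpy())
print(f"MSE={mse:.3f}  DTW={dtw:.3f}")
```

In a real setup the MLP backbone would be replaced by the actual N-HiTS or N-BEATS model, and the per-time-step attention weights would be rendered as heatmaps over the input window, as the paper describes.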
