An Interpretable Deep-Learning Framework for Predicting Hospital Readmissions From Electronic Health Records (2310.10187v1)

Published 16 Oct 2023 in cs.LG and cs.IR

Abstract: With the increasing availability of patient data, modern medicine is shifting towards prospective healthcare. Electronic health records contain a variety of information useful for clinical patient description and can be exploited to construct predictive models, since similar medical histories are likely to lead to similar progressions. One example is unplanned hospital readmission prediction, an essential task for reducing hospital costs and improving patient health. Although predictive models, especially deep-learning models, show very good performance, they are often criticized for the poor interpretability of their results, a fundamental requirement in the medical field, where incorrect predictions may have serious consequences for patient health. In this paper we propose a novel, interpretable deep-learning framework for predicting unplanned hospital readmissions, supported by NLP findings on word embeddings and by neural-network models (ConvLSTM) that better handle temporal data. We validate our system on the two predictive tasks of hospital readmission within 30 and 180 days, using real-world data. In addition, we introduce and test a model-dependent technique to make the representation of results easily interpretable by medical staff. Our solution achieves better performance than traditional machine-learning models while providing more interpretable results.
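The pipeline described in the abstract can be pictured as: embed each medical code, aggregate codes into per-visit vectors, and run a temporal model over the visit sequence to score readmission risk. The following is a minimal numpy sketch of that data flow only; the code vocabulary, embedding dimension, and the simple max-pooled temporal convolution are illustrative assumptions (the paper itself uses trained word embeddings and a ConvLSTM, not this toy convolution).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a tiny vocabulary of ICD-style diagnosis codes,
# each mapped to a dense vector, as in Word2Vec-based EHR representations.
vocab = ["401.9", "428.0", "250.00", "584.9", "V58.61"]
code_to_idx = {c: i for i, c in enumerate(vocab)}
emb_dim = 8
embeddings = rng.normal(size=(len(vocab), emb_dim))

def embed_visit(codes):
    """Represent one hospital visit as the mean of its code embeddings."""
    idx = [code_to_idx[c] for c in codes]
    return embeddings[idx].mean(axis=0)

def predict_readmission(visits, conv_w, out_w, out_b, kernel=2):
    """Toy stand-in for the temporal model: a valid 1D convolution over the
    sequence of visit vectors, max-pooled over time, then a sigmoid score."""
    X = np.stack([embed_visit(v) for v in visits])        # (T, emb_dim)
    T = len(X)
    # Sliding windows over time: (T-kernel+1, kernel, emb_dim)
    windows = np.stack([X[t:t + kernel] for t in range(T - kernel + 1)])
    # Contract each window with the filters: (T-kernel+1, n_filters)
    feats = np.tensordot(windows, conv_w, axes=([1, 2], [0, 1]))
    pooled = feats.max(axis=0)                            # max over time
    logit = pooled @ out_w + out_b
    return 1.0 / (1.0 + np.exp(-logit))                   # readmission prob.

# Example patient history: three visits, each a set of diagnosis codes.
n_filters = 4
conv_w = rng.normal(size=(2, emb_dim, n_filters))
out_w = rng.normal(size=n_filters)
patient = [["401.9", "250.00"], ["428.0"], ["584.9", "V58.61"]]
prob = predict_readmission(patient, conv_w, out_w, out_b=0.0)
print(f"readmission probability: {prob:.3f}")
```

In the paper's framework the convolution is replaced by a ConvLSTM, whose recurrent state lets the model weigh how earlier visits condition later ones; the attraction of convolutional filters for interpretability is that each filter's activations can be traced back to the specific codes and visits that triggered them.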

Authors (3)
  1. Fabio Azzalini (1 paper)
  2. Tommaso Dolci (3 papers)
  3. Marco Vagaggini (1 paper)
