
GARNN: An Interpretable Graph Attentive Recurrent Neural Network for Predicting Blood Glucose Levels via Multivariate Time Series (2402.16230v1)

Published 26 Feb 2024 in cs.LG and cs.AI

Abstract: Accurate prediction of future blood glucose (BG) levels can effectively improve BG management for people living with diabetes, thereby reducing complications and improving quality of life. State-of-the-art BG prediction has been achieved by leveraging advanced deep learning methods to model multi-modal data, i.e., sensor data and self-reported event data, organised as multi-variate time series (MTS). However, these methods are mostly regarded as "black boxes" and not entirely trusted by clinicians and patients. In this paper, we propose interpretable graph attentive recurrent neural networks (GARNNs) to model MTS, explaining variable contributions by summarizing variable importance and generating feature maps with graph attention mechanisms rather than post-hoc analysis. We evaluate GARNNs on four datasets representing diverse clinical scenarios. Compared with twelve well-established baseline methods, GARNNs not only achieve the best prediction accuracy but also provide high-quality temporal interpretability, in particular for postprandial glucose levels resulting from meal intake and insulin injection. These findings underline the potential of GARNN as a robust tool for improving diabetes care, bridging the gap between deep learning technology and real-world healthcare solutions.
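The abstract's key idea is that attention weights computed over a graph of MTS variables double as interpretable importance scores, with no post-hoc attribution step. The paper's own architecture is not reproduced here; as a rough illustration only, a GAT-style attention pass over variable nodes (the mechanism family the abstract names) might look like the following sketch. All function names, shapes, and the LeakyReLU slope are assumptions, not details from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(h, W, a):
    """One GAT-style attention pass over variable nodes.

    h: (n_vars, d_in)  one feature vector per MTS variable
       (e.g. glucose, carbs, insulin, ...)
    W: (d_in, d_out)   shared linear projection
    a: (2 * d_out,)    attention scoring vector

    Returns updated node features and the attention matrix; row i of
    the attention matrix can be read as how strongly each variable
    contributes to variable i's updated representation.
    """
    z = h @ W                                # (n_vars, d_out)
    n = z.shape[0]
    scores = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # e_ij = LeakyReLU(a^T [z_i || z_j]), slope 0.2 assumed
            s = a @ np.concatenate([z[i], z[j]])
            scores[i, j] = s if s > 0 else 0.2 * s
    alpha = softmax(scores, axis=1)          # each row sums to 1
    return alpha @ z, alpha
```

Because each row of `alpha` is a normalized distribution over variables, the weights can be summarized across time steps to yield the kind of variable-importance ranking the abstract describes, without a separate attribution method such as SHAP or LIME.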

