
Individualized Dosing Dynamics via Neural Eigen Decomposition (2306.14020v1)

Published 24 Jun 2023 in cs.LG

Abstract: Dosing models often use differential equations to model biological dynamics. Neural differential equations in particular can learn to predict the derivative of a process, which permits predictions at irregular points in time. However, this temporal flexibility often comes with a high sensitivity to noise, whereas medical problems often present high noise and limited data. Moreover, medical dosing models must generalize reliably over individual patients and changing treatment policies. To address these challenges, we introduce the Neural Eigen Stochastic Differential Equation algorithm (NESDE). NESDE provides individualized modeling (using a hypernetwork over patient-level parameters); generalization to new treatment policies (using decoupled control); tunable expressiveness according to the noise level (using piecewise linearity); and fast, continuous, closed-form prediction (using spectral representation). We demonstrate the robustness of NESDE in both synthetic and real medical problems, and use the learned dynamics to publish simulated medical gym environments.
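The closed-form prediction the abstract attributes to the spectral representation can be illustrated with a minimal sketch. For linear dynamics dx/dt = A·x, an eigendecomposition A = V·diag(λ)·V⁻¹ yields x(t) = V·diag(exp(λt))·V⁻¹·x(0) at any continuous time t, with no numerical ODE stepping. This is only an assumption-laden illustration of the spectral prediction step, not the NESDE algorithm itself (which learns such parameters per patient via a hypernetwork and handles noise and control):

```python
import numpy as np

def spectral_predict(A, x0, t):
    """Closed-form state prediction for linear dynamics dx/dt = A x,
    via the eigendecomposition A = V diag(lam) V^{-1}."""
    lam, V = np.linalg.eig(A)                    # eigenvalues, eigenvectors
    c = np.linalg.solve(V, x0.astype(complex))   # coordinates in the eigenbasis
    x_t = V @ (np.exp(lam * t) * c)              # propagate each mode independently
    return x_t.real                              # real dynamics -> real state

# A stable 2D system (negative eigenvalues): the state decays toward zero.
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
x0 = np.array([1.0, 1.0])

# Predictions at irregular times, each obtained in a single closed-form step.
for t in [0.1, 0.37, 2.0]:
    print(t, spectral_predict(A, x0, t))
```

Because each eigenmode evolves independently as exp(λt), evaluating the state at an arbitrary (possibly irregular) observation time costs the same as evaluating it at the next step, which is the property the abstract highlights for irregularly sampled medical data.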

