
A Self-Adaptive Penalty Method for Integrating Prior Knowledge Constraints into Neural ODEs (2307.14940v3)

Published 27 Jul 2023 in cs.LG and math.OC

Abstract: The continuous dynamics of natural systems have been effectively modelled using Neural Ordinary Differential Equations (Neural ODEs). However, for accurate and meaningful predictions, it is crucial that the models follow the underlying rules or laws that govern these systems. In this work, we propose a self-adaptive penalty algorithm for Neural ODEs to enable the modelling of constrained natural systems. The proposed self-adaptive penalty function can dynamically adjust the penalty parameters. The explicit introduction of prior knowledge helps to increase the interpretability of Neural ODE-based models. We validate the proposed approach by modelling three natural systems with prior knowledge constraints: population growth, chemical reaction evolution, and damped harmonic oscillator motion. The numerical experiments, together with a comparison against other penalty Neural ODE approaches and the vanilla Neural ODE, demonstrate the effectiveness of the proposed self-adaptive penalty algorithm for Neural ODEs in modelling constrained natural systems. Moreover, the self-adaptive penalty approach provides more accurate and robust models with reliable and meaningful predictions.
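
The abstract describes the method only at a high level, so the sketch below is a minimal, hypothetical PyTorch illustration of a penalty-based Neural ODE with an adaptively updated penalty weight. The non-negativity constraint, the logistic toy data, the fixed-step Euler solver, and the rule "increase the weight when the violation stops shrinking" are all assumptions made for illustration, not the paper's actual formulation.

import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Small MLP parameterising dy/dt = f_theta(y)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )
    def forward(self, y):
        return self.net(y)

def rollout(func, y0, t):
    """Fixed-step Euler integration (stand-in for an adaptive ODE solver)."""
    ys = [y0]
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        ys.append(ys[-1] + dt * func(ys[-1]))
    return torch.stack(ys)

def constraint_violation(ys):
    """Illustrative prior-knowledge constraint: states stay non-negative
    (e.g. populations or concentrations). Returns the mean violation."""
    return torch.relu(-ys).mean()

# Toy data: noisy logistic growth (a hypothetical stand-in for the paper's datasets).
t = torch.linspace(0.0, 5.0, 50)
y_true = (1.0 / (1.0 + 9.0 * torch.exp(-t))).unsqueeze(-1)
y_obs = y_true + 0.01 * torch.randn_like(y_true)

func = ODEFunc(dim=1)
opt = torch.optim.Adam(func.parameters(), lr=1e-3)
mu, prev_violation = 1.0, float("inf")  # penalty weight and its adaptation state

for epoch in range(2000):
    opt.zero_grad()
    ys = rollout(func, y_obs[0], t)
    data_loss = ((ys - y_obs) ** 2).mean()
    violation = constraint_violation(ys)
    loss = data_loss + mu * violation   # penalised objective
    loss.backward()
    opt.step()

    # Self-adaptive step (illustrative heuristic, not the paper's rule):
    # stiffen the penalty whenever the violation is not clearly improving.
    v = float(violation.detach())
    if v > 0.9 * prev_violation:
        mu = min(mu * 2.0, 1e4)
    prev_violation = v

In practice the Euler rollout would typically be replaced by an adaptive solver (e.g. torchdiffeq's odeint), and the non-negativity term by whatever prior-knowledge law governs the system being modelled, such as conservation constraints for the chemical reaction example.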

