
Coupling parameter and particle dynamics for adaptive sampling in Neural Galerkin schemes (2306.15630v1)

Published 27 Jun 2023 in math.NA, cs.LG, and cs.NA

Abstract: Training nonlinear parametrizations such as deep neural networks to numerically approximate solutions of partial differential equations is often based on minimizing a loss that includes the residual, which is analytically available in limited settings only. At the same time, empirically estimating the training loss is challenging because residuals and related quantities can have high variance, especially for transport-dominated and high-dimensional problems that exhibit local features such as waves and coherent structures. Thus, estimators based on data samples from un-informed, uniform distributions are inefficient. This work introduces Neural Galerkin schemes that estimate the training loss with data from adaptive distributions, which are empirically represented via ensembles of particles. The ensembles are actively adapted by evolving the particles with dynamics coupled to the nonlinear parametrizations of the solution fields so that the ensembles remain informative for estimating the training loss. Numerical experiments indicate that few dynamic particles are sufficient for obtaining accurate empirical estimates of the training loss, even for problems with local features and with high-dimensional spatial domains.
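The core idea in the abstract can be illustrated with a minimal toy sketch. This is not the paper's code or its actual scheme; all names and the Gaussian-bump ansatz are illustrative assumptions. A single-parameter ansatz for the 1D advection equation u_t + c u_x = 0 is evolved by a least-squares (Galerkin) fit of the parameter velocity, with the loss estimated from a small ensemble of particles that are advected along with the wave so they stay where the local feature lives:

```python
# Hedged sketch (illustrative only): Neural Galerkin time stepping with
# adaptive particles for u_t + c u_x = 0, using the one-parameter ansatz
# u(x; theta) = exp(-(x - theta)^2 / (2 s^2)), a moving local bump.
import numpy as np

c, s = 1.0, 0.05                       # wave speed, bump width (local feature)

def u(x, th):      return np.exp(-(x - th)**2 / (2 * s**2))
def du_dth(x, th): return u(x, th) * (x - th) / s**2   # d/d theta of ansatz
def du_dx(x, th):  return -u(x, th) * (x - th) / s**2  # spatial derivative

def galerkin_rhs(th, xs):
    """Least-squares fit of theta_dot to the PDE right-hand side -c u_x,
    with the inner products estimated by Monte Carlo over samples xs."""
    J = du_dth(xs, th)                 # Jacobian of the ansatz w.r.t. theta
    f = -c * du_dx(xs, th)             # PDE right-hand side at the samples
    return (J @ f) / (J @ J)           # normal-equations solve (scalar here)

dt, steps = 1e-3, 1000
th = 0.2                               # initial bump position
rng = np.random.default_rng(0)
particles = 0.2 + s * rng.standard_normal(64)   # ensemble near the feature
for _ in range(steps):
    th += dt * galerkin_rhs(th, particles)
    particles += dt * c                # particles coupled to the dynamics:
                                       # they ride along with the wave, so the
                                       # loss estimate stays informative
# After t = 1 the bump should sit near x = 0.2 + c * 1.0 = 1.2.
print(th)
```

The point of the sketch is the last comment: a fixed uniform sample on a large domain would place almost no points under the narrow bump once it has moved, giving a high-variance (here, degenerate) estimate of the Galerkin system, whereas the advected ensemble keeps tracking the region where the residual is non-negligible.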

Citations (12)
