Efficient Sobolev approximation of linear parabolic PDEs in high dimensions (2306.16811v1)

Published 29 Jun 2023 in math.NA and cs.NA

Abstract: In this paper, we study the error in the first-order Sobolev norm in the approximation of solutions to linear parabolic PDEs. We use a Monte Carlo Euler scheme obtained by combining the Feynman–Kac representation with an Euler discretization of the underlying stochastic process. We derive approximation rates depending on the time discretization, the number of Monte Carlo simulations, and the dimension. In particular, we show that the Monte Carlo Euler scheme breaks the curse of dimensionality with respect to the first-order Sobolev norm. Our argument is based on new estimates on the weak error of the Euler approximation of a diffusion process together with its derivative with respect to the initial condition. As a consequence, we obtain that neural networks are able to approximate solutions of linear parabolic PDEs in the first-order Sobolev norm without the curse of dimensionality, provided the coefficients of the PDEs admit an efficient approximation with neural networks.
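The Monte Carlo Euler scheme described in the abstract can be sketched in a few lines: simulate the underlying diffusion with an Euler–Maruyama discretization and average the terminal payoff, which by the Feynman–Kac representation approximates the PDE solution at the starting point. The sketch below is a minimal illustration of this idea, not the paper's implementation; the function names and parameters are hypothetical, and the error analysis in Sobolev norm that the paper develops is not reproduced here.

```python
import numpy as np

def monte_carlo_euler(x0, g, mu, sigma, T, n_steps, n_paths, seed=0):
    """Approximate u(T, x0) = E[g(X_T)] via the Feynman-Kac representation,
    where dX_t = mu(X_t) dt + sigma(X_t) dW_t and X_0 = x0.

    Hypothetical sketch: mu and sigma are assumed to broadcast over a
    (n_paths, d) array of particle positions.
    """
    rng = np.random.default_rng(seed)
    d = len(x0)
    h = T / n_steps  # time-discretization step
    X = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    for _ in range(n_steps):
        # Euler-Maruyama step: X_{k+1} = X_k + mu(X_k) h + sigma(X_k) dW
        dW = rng.normal(scale=np.sqrt(h), size=(n_paths, d))
        X = X + mu(X) * h + sigma(X) * dW
    # Monte Carlo average over the simulated paths
    return g(X).mean()

# Sanity check against the d-dimensional heat equation u_t = (1/2) * Laplacian(u),
# u(0, x) = |x|^2, whose exact solution is u(T, x) = |x|^2 + d * T.
est = monte_carlo_euler(
    x0=np.zeros(2),
    g=lambda X: (X ** 2).sum(axis=1),
    mu=lambda X: 0.0,
    sigma=lambda X: 1.0,
    T=1.0, n_steps=50, n_paths=200_000,
)
```

Note that the overall error splits into a time-discretization bias from the Euler step and a statistical error of order `n_paths ** -0.5` from the Monte Carlo average, which is the trade-off the paper's rates quantify.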
