
Physics-enhanced deep surrogates for partial differential equations (2111.05841v4)

Published 10 Nov 2021 in cs.LG and physics.app-ph

Abstract: Many physics and engineering applications demand Partial Differential Equation (PDE) property evaluations that are traditionally computed with resource-intensive high-fidelity numerical solvers. Data-driven surrogate models provide an efficient alternative but come with a significant cost of training. Emerging applications would benefit from surrogates with an improved accuracy-cost tradeoff when studied at scale. Here we present a "physics-enhanced deep-surrogate" ("PEDS") approach towards developing fast surrogate models for complex physical systems described by PDEs. Specifically, a combination of a low-fidelity, explainable physics simulator and a neural network generator is proposed, which is trained end-to-end to globally match the output of an expensive high-fidelity numerical solver. Experiments on three exemplar test cases, diffusion, reaction-diffusion, and electromagnetic scattering models, show that a PEDS surrogate can be up to 3$\times$ more accurate than an ensemble of feedforward neural networks with limited data ($\approx 10^3$ training points), and reduces the training data needed by at least a factor of 100 to achieve a target error of 5%. Experiments reveal that PEDS provides a general, data-driven strategy to bridge the gap between a vast array of simplified physical models and corresponding brute-force numerical solvers modeling complex systems, offering accuracy, speed, data efficiency, as well as physical insight into the process.
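The abstract describes PEDS as a neural-network generator whose output is fed into a cheap, low-fidelity physics solver, with the composition trained end-to-end against high-fidelity data. A minimal sketch of that composition, assuming a toy 1-D diffusion problem and an illustrative one-layer "generator" (none of these function names or shapes come from the paper; they are hypothetical stand-ins for the architecture it describes):

```python
import numpy as np

def generator(p, W, b):
    # Hypothetical one-layer NN generator: maps design parameters p to a
    # coarse diffusion-coefficient field kappa on the low-fidelity grid.
    # Softplus keeps the coefficients strictly positive.
    z = W @ p + b
    return np.log1p(np.exp(z)) + 0.1

def low_fidelity_solve(kappa, f):
    # Coarse finite-difference solve of -(kappa u')' = f on (0, 1)
    # with u(0) = u(1) = 0; this plays the role of PEDS's cheap,
    # explainable low-fidelity simulator.
    n = len(kappa)
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        km = 0.5 * (kappa[i - 1] + kappa[i]) if i > 0 else kappa[0]
        kp = 0.5 * (kappa[i] + kappa[i + 1]) if i < n - 1 else kappa[-1]
        A[i, i] = (km + kp) / h**2
        if i > 0:
            A[i, i - 1] = -km / h**2
        if i < n - 1:
            A[i, i + 1] = -kp / h**2
    return np.linalg.solve(A, f)

def peds_forward(p, W, b, f):
    # End-to-end surrogate: NN generator -> low-fidelity solver.
    # In PEDS this composition is differentiated through and trained
    # to match a high-fidelity solver's output.
    return low_fidelity_solve(generator(p, W, b), f)
```

In the paper the solver is differentiable so gradients flow through it into the generator during training; here the training loop is omitted and only the forward composition is sketched.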

Authors (5)
  1. Raphaël Pestourie (15 papers)
  2. Youssef Mroueh (66 papers)
  3. Chris Rackauckas (23 papers)
  4. Payel Das (104 papers)
  5. Steven G. Johnson (126 papers)
Citations (13)
