
Approximating Numerical Fluxes Using Fourier Neural Operators for Hyperbolic Conservation Laws (2401.01783v4)

Published 3 Jan 2024 in math.NA, cs.LG, and cs.NA

Abstract: Classical numerical schemes have long been the standard computational tools for solving partial differential equations (PDEs). More recently, neural network-based methods have emerged. Despite these advances, neural network-based methods such as physics-informed neural networks (PINNs) and neural operators still exhibit deficiencies in robustness and generalization. To address these issues, numerous studies have integrated classical numerical frameworks with machine learning, incorporating neural networks into parts of traditional numerical methods. In this study, we focus on hyperbolic conservation laws and replace traditional numerical fluxes with neural operators. To this end, we develop loss functions inspired by established numerical schemes for conservation laws and approximate the numerical fluxes using Fourier neural operators (FNOs). Our experiments demonstrate that this approach combines the strengths of traditional numerical schemes and FNOs, outperforming standard FNO methods in several respects: the method is robust, resolution-invariant, and feasible as a data-driven method. In particular, it can make continuous predictions over time and generalizes better to out-of-distribution (OOD) samples, both of which are challenges for existing neural operator methods.
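To make the abstract's core idea concrete, the sketch below shows the conservative finite-volume update for Burgers' equation with a classical Lax–Friedrichs numerical flux. In the paper's framework, a flux function of this kind is what gets replaced by a trained FNO; the grid size, CFL factor, and Lax–Friedrichs baseline here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lax_friedrichs_flux(u_left, u_right, alpha):
    """Classical Lax-Friedrichs numerical flux for Burgers' equation,
    f(u) = u^2 / 2. The paper's proposal is to learn a flux like this
    with a Fourier neural operator instead of fixing it analytically."""
    f = lambda u: 0.5 * u ** 2
    return 0.5 * (f(u_left) + f(u_right)) - 0.5 * alpha * (u_right - u_left)

def conservative_step(u, dt, dx):
    """One conservative finite-volume update on a periodic grid:
    u_i^{n+1} = u_i^n - (dt/dx) * (F_{i+1/2} - F_{i-1/2})."""
    alpha = np.max(np.abs(u))             # max wave speed, |f'(u)| = |u|
    u_right = np.roll(u, -1)              # periodic right neighbor
    F = lax_friedrichs_flux(u, u_right, alpha)   # F_{i+1/2} for each cell i
    return u - dt / dx * (F - np.roll(F, 1))     # F_{i+1/2} - F_{i-1/2}

# Smooth initial data on a periodic grid (illustrative setup).
N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.sin(2 * np.pi * x)
dx = 1.0 / N
dt = 0.4 * dx / max(np.max(np.abs(u)), 1e-12)   # CFL-limited time step
for _ in range(50):
    u = conservative_step(u, dt, dx)
```

Because the update subtracts differences of a single flux array, the scheme is conservative by construction; swapping in a learned flux preserves that property, which is one reason the paper keeps the classical update structure and learns only the flux.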
