
Reference Neural Operators: Learning the Smooth Dependence of Solutions of PDEs on Geometric Deformations (2405.17509v1)

Published 27 May 2024 in cs.LG

Abstract: For partial differential equations on domains of arbitrary shapes, existing neural operator methods attempt to learn a mapping from geometries to solutions. This typically requires a large dataset of geometry-solution pairs to obtain a sufficiently accurate operator. For many industrial applications, e.g., engineering design optimization, this requirement can be prohibitive, since even a single simulation may take hours or days of computation. To address this issue, we propose reference neural operators (RNO), a novel way of implementing neural operators that learns the smooth dependence of solutions on geometric deformations. Specifically, given a reference solution, RNO can predict solutions corresponding to arbitrary deformations of the reference geometry. This approach turns out to be much more data-efficient. Through extensive experiments, we show that RNO can learn this dependence across various types and numbers of geometry objects with relatively small datasets. RNO outperforms baseline models in accuracy by a wide margin, achieving up to 80% error reduction.
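The abstract's key idea can be sketched as an interface: rather than mapping a geometry directly to a solution, the operator takes a reference (geometry, solution) pair plus a deformation of that geometry and predicts the deformed solution. The sketch below is purely illustrative; all names (`ReferenceSample`, `rno_predict`, the translation-only `deform`) are assumptions, not the authors' implementation, and the `correction` callable stands in for the trained network.

```python
# Hypothetical sketch of an RNO-style interface, as described in the abstract.
# Not the authors' code: names, types, and the toy deformation are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Point = Tuple[float, float]

@dataclass
class ReferenceSample:
    geometry: List[Point]   # sample points on the reference shape
    solution: List[float]   # precomputed field values at those points

def deform(geometry: List[Point], shift: Point) -> List[Point]:
    """A toy geometric deformation: rigid translation of the sample points."""
    dx, dy = shift
    return [(x + dx, y + dy) for x, y in geometry]

def rno_predict(ref: ReferenceSample,
                deformed_geometry: List[Point],
                correction: Callable[[Point, Point], float]) -> List[float]:
    """Predict the solution on the deformed geometry as the reference
    solution plus a correction depending on each point's displacement --
    the 'smooth dependence on deformations' the paper exploits.
    `correction` stands in for the learned network."""
    return [u + correction(p_ref, p_def)
            for u, p_ref, p_def in zip(ref.solution, ref.geometry,
                                       deformed_geometry)]
```

With a zero correction the prediction reduces to the reference solution, which illustrates why small deformations should need only small, easily learnable corrections.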
