Towards a Machine-Learned Poisson Solver for Low-Temperature Plasma Simulations in Complex Geometries (2306.07604v1)

Published 13 Jun 2023 in physics.comp-ph, cs.LG, cs.NA, and math.NA

Abstract: Poisson's equation plays an important role in modeling many physical systems. In electrostatic self-consistent low-temperature plasma (LTP) simulations, Poisson's equation is solved at each simulation time step, which can amount to a significant computational cost for the entire simulation. In this paper, we describe the development of a generic machine-learned Poisson solver specifically designed for the requirements of LTP simulations in complex 2D reactor geometries on structured Cartesian grids. Here, the reactor geometries can consist of inner electrodes and dielectric materials as often found in LTP simulations. The approach leverages a hybrid CNN-transformer network architecture in combination with a weighted multi-term loss function. We train the network using highly randomized synthetic data to ensure the generalizability of the learned solver to unseen reactor geometries. The results demonstrate that the learned solver is able to produce quantitatively and qualitatively accurate solutions. Furthermore, it generalizes well to new reactor geometries, such as reference geometries found in the literature. To reach the numerical accuracy required in LTP simulations, we employ a conventional iterative solver to refine the raw predictions, in particular to recover the high-frequency features not resolved by the initial prediction. With this, the proposed learned Poisson solver provides the required accuracy and is potentially faster than a pure GPU-based conventional iterative solver. This opens up new possibilities for developing a generic and high-performing learned Poisson solver for LTP systems in complex geometries.
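The hybrid strategy described in the abstract, where a conventional iterative solver refines the network's raw prediction to recover high-frequency features, can be sketched in a minimal form. The snippet below is an illustration only, not the paper's implementation: it replaces the CNN-transformer prediction with a hypothetical stand-in (the exact solution of a manufactured problem plus a high-frequency perturbation) and uses plain Jacobi sweeps instead of the GPU multigrid-class solvers used in practice.

```python
import numpy as np

def jacobi_refine(phi, rhs, h, n_iter):
    """Refine a candidate solution of the 2D Poisson equation
    laplacian(phi) = rhs (Dirichlet boundaries) with Jacobi sweeps."""
    phi = phi.copy()
    for _ in range(n_iter):
        phi[1:-1, 1:-1] = 0.25 * (
            phi[2:, 1:-1] + phi[:-2, 1:-1]
            + phi[1:-1, 2:] + phi[1:-1, :-2]
            - h * h * rhs[1:-1, 1:-1]
        )
    return phi

def residual_norm(phi, rhs, h):
    """Max-norm residual of the 5-point discrete Poisson equation."""
    lap = (phi[2:, 1:-1] + phi[:-2, 1:-1]
           + phi[1:-1, 2:] + phi[1:-1, :-2]
           - 4.0 * phi[1:-1, 1:-1]) / (h * h)
    return np.max(np.abs(lap - rhs[1:-1, 1:-1]))

# Manufactured problem with a known smooth solution.
n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
rhs = -2.0 * np.pi ** 2 * exact  # analytic Laplacian of `exact`

# Hypothetical stand-in for the network's raw prediction: the true
# field contaminated by a high-frequency error mode, mimicking the
# under-resolved high-frequency content of a learned prediction.
pred = exact + 0.05 * np.sin(3 * np.pi * X) * np.sin(3 * np.pi * Y)
pred[0, :] = pred[-1, :] = pred[:, 0] = pred[:, -1] = 0.0

r0 = residual_norm(pred, rhs, h)
refined = jacobi_refine(pred, rhs, h, n_iter=500)
r1 = residual_norm(refined, rhs, h)
print(f"residual before refinement: {r0:.3e}, after: {r1:.3e}")
```

The design rationale carries over from the paper's motivation: stationary relaxation methods damp high-frequency error components fastest, which is precisely the part of the solution a neural prediction tends to miss (spectral bias), so a few cheap refinement sweeps can restore the accuracy an LTP simulation requires.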
