A reinforcement learning strategy for p-adaptation in high order solvers (2306.08292v1)
Abstract: Reinforcement learning (RL) has emerged as a promising approach to automating decision processes. This paper explores the application of RL techniques to optimise the polynomial order in the computational mesh when using high-order solvers. Mesh adaptation plays a crucial role in numerical simulations, improving accuracy while reducing computational cost. Here, actor-critic RL models based on Proximal Policy Optimization offer a data-driven approach for agents to learn optimal mesh modifications based on evolving conditions. The paper provides a strategy for p-adaptation in high-order solvers and includes insights into the main aspects of RL-based mesh adaptation, including the formulation of appropriate reward structures and the interaction between the RL agent and the simulation environment. We discuss the impact of RL-based mesh p-adaptation on computational efficiency and accuracy. We test the RL p-adaptation strategy on the 1D inviscid Burgers' equation to demonstrate its effectiveness. The RL strategy reduces the computational cost and improves accuracy over uniform adaptation, while minimising human intervention.
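To make the agent-environment interaction and reward structure described above concrete, the following is a minimal, self-contained sketch in Python. It is not the paper's implementation: the environment class `PAdaptationEnv`, the smoothness "sensor", the surrogate error model, and the `cost_weight` parameter are all illustrative assumptions, and a random policy stands in for the PPO actor-critic agent that the paper couples to a high-order Burgers' solver.

```python
# Hypothetical sketch of an RL environment for element-wise p-adaptation
# of a 1D inviscid Burgers' solver. The real setup couples a PPO
# actor-critic agent to a high-order solver; here a surrogate model only
# mimics the accuracy/cost trade-off so the loop structure is visible.
import numpy as np


class PAdaptationEnv:
    """Each step, the agent observes a per-element sensor (e.g. a measure
    of local resolution) and chooses to lower, keep, or raise the local
    polynomial order p. The reward trades accuracy against cost."""

    def __init__(self, n_elements=20, p_min=1, p_max=6, cost_weight=0.1):
        self.n_elements = n_elements
        self.p_min, self.p_max = p_min, p_max
        self.cost_weight = cost_weight
        self.reset()

    def reset(self):
        self.p = np.full(self.n_elements, self.p_min)
        # Stand-in "smoothness" field: low values mark under-resolved
        # elements (e.g. near the steepening Burgers front).
        self.sensor = np.random.uniform(0.0, 1.0, self.n_elements)
        return self._observe()

    def _observe(self):
        # State: per-element sensor value and normalised polynomial order.
        return np.stack([self.sensor, self.p / self.p_max], axis=1)

    def step(self, actions):
        # actions[i] in {0, 1, 2} -> decrease / keep / increase p in element i.
        self.p = np.clip(self.p + (np.asarray(actions) - 1),
                         self.p_min, self.p_max)
        # Surrogate error: under-resolved elements benefit most from high p.
        error = np.mean((1.0 - self.sensor) * np.exp(-self.p))
        cost = np.mean(self.p) / self.p_max
        reward = -error - self.cost_weight * cost
        return self._observe(), reward


if __name__ == "__main__":
    env = PAdaptationEnv()
    obs = env.reset()
    for _ in range(5):
        # Placeholder random policy; in the paper's strategy a PPO
        # actor-critic agent selects these actions from the observed state.
        actions = np.random.randint(0, 3, env.n_elements)
        obs, reward = env.step(actions)
        print(f"mean p = {obs[:, 1].mean() * env.p_max:.2f}, reward = {reward:.4f}")
```

In this sketch the reward penalises a surrogate accuracy error plus a cost term proportional to the mean polynomial order, reflecting the accuracy-versus-cost balance the abstract attributes to the reward formulation; the actual reward in the paper may differ.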