
Extreme Event Prediction with Multi-agent Reinforcement Learning-based Parametrization of Atmospheric and Oceanic Turbulence (2312.00907v1)

Published 1 Dec 2023 in cs.LG, cs.CE, physics.ao-ph, physics.comp-ph, and physics.flu-dyn

Abstract: Global climate models (GCMs) are the main tools for understanding and predicting climate change. However, due to limited numerical resolutions, these models suffer from major structural uncertainties; e.g., they cannot resolve critical processes such as small-scale eddies in atmospheric and oceanic turbulence. Thus, such small-scale processes have to be represented as a function of the resolved scales via closures (parametrization). The accuracy of these closures is particularly important for capturing climate extremes. Traditionally, such closures are based on heuristics and simplifying assumptions about the unresolved physics. Recently, supervised-learned closures, trained offline on high-fidelity data, have been shown to outperform classical physics-based closures. However, this approach requires a significant amount of high-fidelity training data and can also lead to instabilities. Reinforcement learning is emerging as a potent alternative for developing such closures, as it requires only low-order statistics and leads to stable closures. In Scientific Multi-Agent Reinforcement Learning (SMARL), computational elements serve the dual role of discretization points and learning agents. We leverage SMARL and fundamentals of turbulence physics to learn closures for prototypes of atmospheric and oceanic turbulence. The policy is trained using only the enstrophy spectrum, which is nearly invariant and can be estimated from a few high-fidelity samples (these few samples are far from enough for supervised/offline learning). We show that these closures lead to stable low-resolution simulations that, at a fraction of the cost, can reproduce the high-fidelity simulations' statistics, including the tails of the probability density functions. The results demonstrate the high potential of SMARL for closure modeling for GCMs, especially in the regime of scarce data and indirect observations.
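The training signal described in the abstract — matching the enstrophy spectrum of a coarse simulation to a high-fidelity reference — can be sketched with a minimal spectrum-and-reward computation. This is an illustrative sketch, not the paper's implementation: the normalization, the radial shell binning, and the log-space mismatch penalty are all assumptions made here for clarity.

```python
import numpy as np

def enstrophy_spectrum(omega):
    """Radially averaged enstrophy spectrum of a 2D vorticity field.

    omega : (N, N) vorticity on a doubly periodic grid.
    Returns the spectrum over integer wavenumber shells k = 0 .. N//2 - 1.
    """
    n = omega.shape[0]
    # Normalized Fourier coefficients; 0.5*|omega_hat|^2 is the per-mode
    # enstrophy density (Parseval-consistent with 0.5*mean(omega**2)).
    omega_hat = np.fft.fft2(omega) / n**2
    density = 0.5 * np.abs(omega_hat) ** 2
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers 0..N/2-1, -N/2..-1
    kx, ky = np.meshgrid(k, k, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    # Sum the enstrophy density over each radial wavenumber shell.
    spectrum = np.bincount(shell.ravel(), weights=density.ravel(),
                           minlength=n // 2)[: n // 2]
    return spectrum

def spectrum_reward(omega_les, target_spectrum, eps=1e-12):
    """Shared reward for the agents: negative log-spectral mismatch
    between the coarse (LES) run and the high-fidelity reference.
    Higher is better; zero means the spectra match shell by shell."""
    s = enstrophy_spectrum(omega_les)
    return -np.mean((np.log(s + eps) - np.log(target_spectrum + eps)) ** 2)
```

In a SMARL setting, each grid point (agent) would observe local resolved quantities and act by setting a local closure coefficient; a reward of this form needs only the reference spectrum, which — as the abstract notes — can be estimated from a handful of high-fidelity samples.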
