Gradient‑free optimization of non‑differentiable online error metrics in hybrid climate simulations

Develop effective gradient‑free optimization strategies to reduce non‑differentiable online error metrics in hybrid physics–machine learning climate simulations that couple neural network parameterizations with the E3SM‑MMF host model, moving beyond checkpoint searches to systematically improve online performance.

Background

The authors note that online errors in hybrid physics–ML climate simulations are non‑differentiable with respect to the neural network parameters, because the network parameterizations are coupled to a non‑differentiable, Fortran‑based host model; this precludes direct gradient‑based optimization of multi‑step (online) losses. As a result, they currently rely on searching across training checkpoints to identify models with better online behavior.
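
For concreteness, the checkpoint-search baseline amounts to a brute-force argmin over saved checkpoints. The sketch below is illustrative only, assuming the paper's workflow in outline: `run_online` and `online_error` are hypothetical stand-ins for the actual E3SM‑MMF coupled run and online scoring, not functions from the paper's code.

```python
import numpy as np

# All names here are hypothetical placeholders: in the real workflow,
# `run_online` would couple a checkpoint's network to E3SM-MMF for a short
# simulation, and `online_error` would score that run against a reference.
checkpoints = [f"ckpt_{i:03d}.pt" for i in range(10)]

def run_online(ckpt_index: int) -> np.ndarray:
    # Toy stand-in producing a deterministic fake "online trajectory".
    return np.random.default_rng(ckpt_index).normal(size=100)

def online_error(trajectory: np.ndarray) -> float:
    # Non-differentiable online metric, e.g. RMSE against a reference run.
    return float(np.sqrt(np.mean(trajectory ** 2)))

best = min(range(len(checkpoints)), key=lambda i: online_error(run_online(i)))
print("best checkpoint:", checkpoints[best])
```

Note that this only selects among models the training trajectory happened to produce; it does not actively optimize the online metric, which is the gap the open question targets.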

They suggest that gradient‑free methods (e.g., imitation learning and Ensemble Kalman Inversion) could be promising, given their successes in related applications, but explicitly state that it remains open how to use such methods to optimize the online errors in this setting.
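
One way Ensemble Kalman Inversion could apply here is to treat the coupled model as a black-box forward map G from (a low-dimensional subset of) network parameters to online diagnostics, then nudge an ensemble of parameter vectors toward target diagnostics y using only forward evaluations. Below is a minimal sketch of the standard EKI update with a toy forward map; in the intended setting, G would wrap a short hybrid E3SM‑MMF run, and all specifics (dimensions, targets, ensemble size) are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def eki_step(thetas, g_outputs, y, gamma, rng):
    """One Ensemble Kalman Inversion update.

    thetas    : (J, p) ensemble of parameter vectors
    g_outputs : (J, d) black-box outputs G(theta_j), e.g. online error
                diagnostics from short coupled runs (no gradients needed)
    y         : (d,) target values for those diagnostics
    gamma     : (d, d) observation-noise covariance
    """
    J = thetas.shape[0]
    dtheta = thetas - thetas.mean(axis=0)        # (J, p) parameter anomalies
    dg = g_outputs - g_outputs.mean(axis=0)      # (J, d) output anomalies
    c_tg = dtheta.T @ dg / J                     # (p, d) cross-covariance
    c_gg = dg.T @ dg / J                         # (d, d) output covariance
    gain = c_tg @ np.linalg.inv(c_gg + gamma)    # (p, d) Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), gamma, size=J)
    return thetas + (y_pert - g_outputs) @ gain.T

# Toy demo with an invertible nonlinear map; in practice G(theta) would run
# a short hybrid simulation with parameters theta and return online metrics.
def G(theta):
    return np.array([theta[0] ** 3 + theta[1], theta[0] - theta[1]])

rng = np.random.default_rng(0)
y = G(np.array([1.0, -2.0]))                     # targets from theta* = [1, -2]
gamma = 1e-4 * np.eye(2)
thetas = rng.normal(size=(50, 2))                # J = 50 ensemble members
for _ in range(20):
    outputs = np.stack([G(t) for t in thetas])
    thetas = eki_step(thetas, outputs, y, gamma, rng)
print(thetas.mean(axis=0))                       # ensemble mean ≈ [1, -2]
```

Because each EKI iteration needs only J independent forward runs, the update is embarrassingly parallel across ensemble members, which fits naturally with batches of short coupled simulations.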

References

"It remains an open question how to use gradient-free methods to optimize these online errors."

Hu et al., "Stable Machine-Learning Parameterization of Subgrid Processes in a Comprehensive Atmospheric Model Learned From Embedded Convection-Permitting Simulations," arXiv:2407.00124, 28 Jun 2024; Section "Discussion and Limitations," Subsection "Improving the Offline and Online Performance."