Uncertainty quantification and interpretability for ML surrogates in chaotic dynamics

Develop and validate uncertainty quantification and interpretability methods for machine learning surrogates that predict basin metrics and safety functions in chaotic dynamical systems. Such methods should provide calibrated confidence estimates and identify the phase-space regions that drive model predictions, so that the surrogates can be relied upon in scientific and engineering use.

Background

The paper surveys recent progress in using machine learning to analyze basins of attraction and to compute safety functions for partial control in chaotic systems. While ML surrogates can accelerate these computations, the authors emphasize that trustworthy deployment requires principled ways to quantify uncertainty and to interpret model predictions.

In the Open Problems section, the authors explicitly identify uncertainty quantification and interpretability as an open problem, noting the practical need to attach confidence to predicted basin metrics or safety functions and to understand which regions of phase space influence the surrogate’s outputs. They point to Bayesian or ensemble approaches and feature attribution methods as potential directions.
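The two directions the authors point to can be illustrated together in a minimal sketch. The snippet below is a hypothetical toy construction, not the paper's method: a bootstrap ensemble of logistic-regression surrogates is trained to classify a stand-in "basin" (points inside a circle in a 2D phase space), ensemble disagreement serves as the confidence estimate, and a simple input-gradient saliency indicates which phase-space coordinate drives a given prediction. All names and the labeling function are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def basin_label(pts):
    # Toy stand-in for a basin of attraction: points inside the unit
    # circle belong to basin 1, everything else to basin 0 (an assumption
    # for illustration, not the paper's actual dynamical system).
    return (pts[:, 0]**2 + pts[:, 1]**2 < 1.0).astype(float)

def features(pts):
    # Quadratic features so a linear model can represent a circular boundary.
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x*x, y*y, x*y])

def train_logreg(X, t, lr=0.5, steps=300):
    # Plain gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - t) / len(t)
    return w

# Bootstrap ensemble: each member is trained on a resampled data set,
# so members disagree most where the surrogate is least constrained.
train_pts = rng.uniform(-2, 2, size=(500, 2))
t = basin_label(train_pts)
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(train_pts), len(train_pts))
    ensemble.append(train_logreg(features(train_pts[idx]), t[idx]))

def predict_with_uncertainty(pts):
    # Mean probability = prediction; std across members = confidence proxy.
    X = features(pts)
    probs = np.stack([1.0 / (1.0 + np.exp(-X @ w)) for w in ensemble])
    return probs.mean(axis=0), probs.std(axis=0)

def input_saliency(pt):
    # Gradient of the mean predicted probability w.r.t. the phase-space
    # coordinates (x, y): a simple attribution of which coordinate
    # drives the surrogate's output at this point.
    x, y = pt
    X = features(np.array([pt]))
    dfx = np.array([0.0, 1.0, 0.0, 2*x, 0.0, y])   # d features / d x
    dfy = np.array([0.0, 0.0, 1.0, 0.0, 2*y, x])   # d features / d y
    grads = []
    for w in ensemble:
        p = 1.0 / (1.0 + np.exp(-(X @ w)[0]))
        grads.append(p * (1 - p) * np.array([w @ dfx, w @ dfy]))
    return np.mean(grads, axis=0)

# Uncertainty should concentrate near the basin boundary, and the
# saliency at (1, 0) should point in the radial (x) direction.
_, std_boundary = predict_with_uncertainty(np.array([[1.0, 0.0]]))
_, std_interior = predict_with_uncertainty(np.array([[0.0, 0.0]]))
sal = input_saliency((1.0, 0.0))
```

In a real application, the logistic models would be replaced by the actual surrogate (e.g., a neural network predicting basin metrics or a safety function), with the same pattern: ensemble spread for calibrated confidence and gradient- or perturbation-based attribution to locate the influential phase-space regions.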

References

Another open problem is uncertainty quantification and interpretability in ML surrogates.

From Basins to safe sets: a machine learning perspective on chaotic dynamics (2601.21510 - Valle et al., 29 Jan 2026) in Section Open Problems