Model epistemic uncertainty in small‑data regimes for probabilistic ML in multi‑fidelity inference

Characterize and model the epistemic uncertainty that arises in small‑data regimes for probabilistic machine‑learning models used in Bayesian multi‑fidelity inverse analysis, and determine suitable uncertainty representations when the behavior of the underlying high‑fidelity simulator is unknown.

Background

The proposed BMFIA (Bayesian multi‑fidelity inverse analysis) framework relies on learning a probabilistic conditional distribution between low‑ and high‑fidelity model outputs from limited data. The authors highlight that properly handling epistemic uncertainty in such small‑data regimes is a general challenge across probabilistic ML, because neither the true behavior of the high‑fidelity model nor the appropriate structure of the uncertainty is known in advance.
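As a minimal sketch of what such a learned conditional might look like, the snippet below fits a Gaussian process to a handful of paired low‑/high‑fidelity evaluations and uses its predictive standard deviation as one possible epistemic‑uncertainty representation. This is not the paper's implementation: the toy simulators `hf_model` and `lf_model`, the kernel choice, and all hyperparameters are hypothetical stand‑ins.

```python
# Sketch only: approximating p(y_HF | y_LF) from a few paired samples
# with a GP whose posterior variance encodes epistemic uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def hf_model(x):          # hypothetical stand-in for an expensive HF simulator
    return np.sin(3.0 * x) + 0.3 * x

def lf_model(x):          # hypothetical cheap low-fidelity approximation
    return np.sin(3.0 * x)

# Small-data regime: only a few paired (y_LF, y_HF) evaluations.
x_train = rng.uniform(-1.0, 1.0, size=8)
y_lf = lf_model(x_train)
y_hf = hf_model(x_train)

# GP over the LF -> HF map; the WhiteKernel absorbs residual noise, and the
# predictive std grows away from the training data (epistemic uncertainty).
kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(y_lf.reshape(-1, 1), y_hf)

y_lf_query = lf_model(np.linspace(-1.5, 1.5, 5)).reshape(-1, 1)
mean, std = gp.predict(y_lf_query, return_std=True)
for m, s in zip(mean, std):
    print(f"p(y_HF | y_LF): mean={m:+.3f}, epistemic std={s:.3f}")
```

A GP posterior is only one candidate representation; the open question raised by the authors is precisely which representation is appropriate when so few samples constrain the conditional.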

Addressing this issue is essential for robust Bayesian inference: mis‑specified epistemic uncertainty can yield posteriors that are either overconfident or overly diffuse, particularly when multi‑fidelity embeddings introduce information loss and modeling approximations. The toy example below makes this sensitivity concrete.
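The following toy illustration, not taken from the paper, shows the effect in the simplest conjugate Gaussian setting: with prior theta ~ N(0, tau^2) and likelihood y_i ~ N(theta, sigma^2), the posterior variance is (1/tau^2 + n/sigma^2)^{-1}, so understating sigma shrinks the posterior (overconfidence) while overstating it inflates the posterior (diffuseness). All numbers are illustrative assumptions.

```python
# Toy conjugate-Gaussian example: mis-stated noise/epistemic variance
# directly rescales the posterior over the inferred parameter.
import numpy as np

rng = np.random.default_rng(1)
true_theta, true_sigma, n, tau = 1.0, 0.5, 5, 2.0
y = rng.normal(true_theta, true_sigma, size=n)

def gaussian_posterior(y, sigma, tau):
    # Posterior of theta under prior N(0, tau^2), likelihood N(theta, sigma^2).
    var = 1.0 / (1.0 / tau**2 + len(y) / sigma**2)
    mean = var * y.sum() / sigma**2
    return mean, np.sqrt(var)

for label, sigma in [("underestimated", 0.1),
                     ("correct", 0.5),
                     ("overestimated", 2.5)]:
    mean, std = gaussian_posterior(y, sigma, tau)
    print(f"{label:>14} sigma={sigma}: posterior N({mean:+.3f}, {std:.3f}^2)")
```

Running this prints a posterior standard deviation roughly an order of magnitude too small when sigma is understated, mirroring the overconfident‑versus‑diffuse trade‑off described above.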

References

An additional general challenge (which is present for all probabilistic machine learning approaches) is modeling the epistemic uncertainty of the small data regime, as it is generally unknown how the actual model behaves and how this uncertainty should be modeled.

Efficient Bayesian multi-fidelity inverse analysis for expensive and non-differentiable physics-based simulations in high stochastic dimensions (arXiv:2505.24708, Nitzler et al., 30 May 2025), Section 2.6 (Error sources, information loss, and extreme cases of BMFIA)