
Model Sensitivity to Uncertainty

Updated 4 December 2025
  • Model Sensitivity to Uncertainty (MSU) is a framework that defines and quantifies how uncertainties in parameters, inputs, and model structures affect prediction outputs.
  • It employs variance-based decompositions, information-theoretic measures, and local sensitivity indices to rigorously attribute uncertainty in complex systems.
  • Scalable algorithmic strategies and adaptive surrogate models make MSU applicable for high-dimensional and computationally intensive problems across various disciplines.

Model Sensitivity to Uncertainty (MSU) formalizes how a model’s predictions or functionals respond to uncertainty in model parameters, input data, or model structure. Across contemporary applied mathematics, engineering, and machine learning, MSU encompasses a range of tools: information-theoretic decompositions, variance-based global sensitivity indices, local and structural measures in complex systems, and robust algorithmic workflows. These frameworks enable researchers to quantify, decompose, and interpret the propagation of uncertainty through models, rigorously identifying which aspects—parameters, structural assumptions, input points, or qualitative modeling choices—most critically affect uncertainty in quantities of interest.

1. Core Mathematical Principles of MSU

The foundational concept is to partition uncertainty in model outputs according to sources in input parameters, data, or structure. For deterministic mappings $f: \mathbb{R}^d \to \mathbb{R}$ with uncertain inputs, classical variance-based ANOVA/Sobol decompositions are central: the total variance $V = \mathrm{Var}[f(X)]$ is decomposed into contributions from each input and from interactions (Dimov et al., 2017). Each first-order Sobol index $S_i = \mathrm{Var}_{X_i}[\mathbb{E}[Y \mid X_i]]/V$ quantifies the fraction of output variance explained by input $X_i$, while total-effect indices aggregate all effects involving $X_i$.
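As a concrete illustration, first-order Sobol indices can be estimated with a standard pick-freeze Monte Carlo scheme. The sketch below is generic numpy code, not tied to any cited implementation, and assumes i.i.d. Uniform(0,1) inputs:

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var[E[Y|X_i]] / Var[Y] for a vectorized model f with d
    independent Uniform(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = f(A)
    V = yA.var()
    S = np.empty(d)
    for i in range(d):
        AB = B.copy()
        AB[:, i] = A[:, i]  # "freeze" input i, resample all others
        # Cov(yA, yAB) estimates Var[E[Y|X_i]]
        S[i] = np.cov(yA, f(AB))[0, 1] / V
    return S

# Toy linear model Y = 3*X1 + X2: analytically S_i = a_i^2 / (a1^2 + a2^2),
# so S ≈ [0.9, 0.1].
a = np.array([3.0, 1.0])
S = first_order_sobol(lambda X: X @ a, d=2)
```

For this additive model the first-order indices sum to one; interaction terms would show up as a gap between the sum of first-order indices and one.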

For models with stochastic dynamics or epistemic/aleatoric uncertainty in predictions, information-theoretic measures become primary. Predictive uncertainty is decomposed into an aleatoric component (irreducible data noise) and an epistemic component (reducible uncertainty about parameters or structure). In Bayesian settings, epistemic uncertainty is captured through conditional mutual information, which quantifies how much observing a new data point reduces uncertainty about future predictions (Futami et al., 2023).
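For a Bayesian classifier represented by posterior samples, this decomposition takes a standard entropy form: total predictive entropy splits into an expected-entropy (aleatoric) term and a mutual-information (epistemic) term. A minimal numpy sketch of that generic decomposition:

```python
import numpy as np

def uncertainty_decomposition(probs):
    """probs: (M, K) array of class probabilities from M posterior
    samples. Returns (total, aleatoric, epistemic) where
    total = H[E[p]], aleatoric = E[H[p]], epistemic = total - aleatoric
    (the mutual information between prediction and parameters)."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum()
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Two posterior samples that are each confident but disagree:
# the ensemble mean is near-uniform, so uncertainty is mostly epistemic.
probs = np.array([[0.99, 0.01],
                  [0.01, 0.99]])
tot, alea, epi = uncertainty_decomposition(probs)
```

Disagreement between posterior samples drives the epistemic term; a single confident, consistent predictor would instead make the aleatoric term dominate.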

2. Variance-based Sensitivity Analysis and Multivariate Extensions

Variance-based global sensitivity analysis remains a cornerstone of MSU. For high-dimensional or expensive simulators, adaptively refined surrogates—such as sensitivity-driven sparse grids—enable tractable estimation of Sobol indices and output statistics with drastically fewer model evaluations (Farcas et al., 2022, Huan et al., 2017).

In multivariate discrete settings, Multivariate Symmetrical Uncertainty (MSU) extends mutual information normalization to tuples of features by leveraging total correlation, accommodating dependencies beyond pairwise interaction. When cardinality bias and sampling effects are properly accounted for, the MSU measure supports robust selection of informative feature subsets for classification (Sosa-Cabrera et al., 2017).
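The core quantity is the total correlation $C(X_1,\dots,X_k) = \sum_i H(X_i) - H(X_1,\dots,X_k)$, normalized to a [0, 1] score. The sketch below uses plug-in entropy estimates and one simple normalization; the cited measure's exact normalization and cardinality-bias corrections are more involved, so this is an illustrative assumption, not the paper's definition:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Plug-in Shannon entropy (nats) of a discrete sample."""
    n = len(labels)
    p = np.array(list(Counter(labels).values())) / n
    return float(-(p * np.log(p)).sum())

def total_correlation(columns):
    """C(X1..Xk) = sum_i H(Xi) - H(X1,...,Xk), plug-in estimate."""
    joint = list(zip(*columns))  # joint sample as tuples
    return sum(entropy(c) for c in columns) - entropy(joint)

def msu_sketch(columns):
    """Total correlation scaled to [0, 1]: 1 for fully redundant
    features, 0 for independent ones. Normalization chosen for
    illustration only."""
    k = len(columns)
    h = sum(entropy(c) for c in columns)
    return 0.0 if h == 0 else (k / (k - 1)) * total_correlation(columns) / h
```

Duplicated columns score 1 (fully redundant), while statistically independent columns score 0, which is the behavior a normalized multivariate dependency measure should exhibit.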

3. Local Sensitivity Measures and Uncertainty-aware Derivatives

For nonlinear supervised learning models or Bayesian neural networks, MSU methodologies generalize derivative-based indices to the predictive distribution rather than point estimates (Paananen et al., 2019, Depeweg et al., 2017). For a Bayesian predictive distribution $p(y \mid \mathbf{x})$, the R-sens sensitivity index replaces the gradient of the mean output with a Fisher information-weighted derivative of the predictive parameter vector, yielding a local sensitivity measure that is automatically damped in regions of high epistemic uncertainty. Interaction sensitivities are constructed analogously from second mixed derivatives. The result is sensitivity rankings, for both main and interaction effects, that remain robust even under substantial uncertainty.
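The damping effect is easiest to see in the Gaussian special case $p(y \mid \mathbf{x}) = \mathcal{N}(\mu(\mathbf{x}), \sigma(\mathbf{x})^2)$, where the Fisher metric for the mean parameter is $1/\sigma^2$, so the mean-gradient contribution becomes $|\partial \mu / \partial x_i| / \sigma(\mathbf{x})$. The following finite-difference sketch illustrates that idea only; it is not the full R-sens measure:

```python
import numpy as np

def gaussian_local_sens(mu, sigma, x, h=1e-5):
    """Local sensitivity of a Gaussian predictive N(mu(x), sigma(x)^2):
    central-difference gradient of the predictive mean, divided by the
    predictive standard deviation so that high predictive uncertainty
    damps the index."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (mu(x + e) - mu(x - e)) / (2 * h)
    return np.abs(grad) / sigma(x)

# Linear predictive mean: raw gradients are [2.0, 0.5].
mu = lambda x: 2.0 * x[0] + 0.5 * x[1]
x0 = np.array([0.3, -0.7])
sens_certain = gaussian_local_sens(mu, lambda x: 1.0, x0)   # ~[2.0, 0.5]
sens_uncertain = gaussian_local_sens(mu, lambda x: 2.0, x0)  # halved
```

Doubling the predictive standard deviation halves every index, so inputs are only flagged as influential where the model is also confident.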

For Bayesian neural networks with latent variables, sensitivities for both epistemic (due to weight uncertainty) and aleatoric (due to latent or intrinsic stochasticity) uncertainty components are obtained by differentiating the respective decomposed variance terms with respect to inputs, computed efficiently via Monte Carlo and backpropagation (Depeweg et al., 2017).

4. Model-form Sensitivity and Structural Uncertainty Ranking

For complex physical models with explicit model-form uncertainty (MFU), MSU is operationalized by embedding parametric or functional corrections into model equations and analyzing their propagated effects via variance-based global sensitivity indices—both at the parameter and grouped structural-assumption level. This approach provides a systematic ranking of which modeling assumptions or submodels (e.g., constitutive relationships, closure schemes) dominate prediction uncertainty (Portone et al., 10 Sep 2025). Importantly, the grouping of parameters allows one to assess the impact of entire blocks of assumptions, and the framework remains robust to parameterization and calibration-induced dependencies.

5. Algorithmic and Computational Strategies for MSU

MSU analysis across large-scale or computationally expensive problems demands scalable algorithms:

  • Vectorized Uncertainty Propagation (VUP) and Input Probability Sensitivity Analysis (IPSA): Efficient GPU-based mapping of input probability distributions to output distributions, facilitating high-throughput evaluation of sensitivity measures (variance, tail probabilities, etc.), with costs that scale sublinearly in the number of scenarios (Vanslette et al., 2019).
  • Monte Carlo, Quasi-Monte Carlo, and Symmetrised Shaking: For high-dimensional integrals in PDE-based models, symmetrised shaking of Sobol sequences attains optimal convergence rates for smooth integrands, furnishing efficient, rigorously bounded estimation of ANOVA-based sensitivity indices (Dimov et al., 2017).
  • Hierarchical and Adaptive Surrogates: Sensitivity-driven, dimension-adaptive sparse grids adaptively refine in directions indicated by provisional Sobol index estimates, massively reducing the total simulation burden while guaranteeing accurate UQ and sensitivity outputs (Farcas et al., 2022).
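The vectorized-propagation idea in particular reduces to a simple pattern: evaluate the model on an entire batch of input samples in one call, then summarize the induced output distribution. The numpy sketch below captures only that pattern; the cited VUP work uses GPU kernels and richer sensitivity outputs:

```python
import numpy as np

def propagate(model, samples):
    """Vectorized uncertainty propagation: push a whole ensemble of
    input samples through the model at once and summarize the output
    distribution (mean, variance, and an upper-tail probability)."""
    y = model(samples)  # single vectorized evaluation over all samples
    return {
        "mean": float(y.mean()),
        "var": float(y.var()),
        "p_tail": float((y > y.mean() + 2 * y.std()).mean()),
    }

rng = np.random.default_rng(1)
X = rng.normal(size=(200_000, 2))
# Toy model: dominated by the square of the first input, so the output
# distribution is right-skewed with mean ~1 and variance ~2.01.
stats = propagate(lambda X: X[:, 0] ** 2 + 0.1 * X[:, 1], X)
```

Because the model call is vectorized, swapping in a different input distribution (for input-probability sensitivity analysis) costs only one more batched evaluation.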

6. Information-theoretic Analysis and Epistemic Sensitivity

Recent work formalizes MSU via information-theoretic decompositions. For Bayesian inference, the minimum excess risk due to epistemic uncertainty admits an exact decomposition into conditional mutual information terms between test and training data. These test–train terms directly quantify how proximity to observed data locally reduces epistemic uncertainty in predictions, with explicit asymptotics and a connection to geometric similarity in linear models (Futami et al., 2023).

In Bayesian meta-learning, analogous decompositions measure sensitivity between tasks, capturing how much each meta-training task reduces epistemic uncertainty in a meta-test task. These are the basis for principled assessment of transfer and generalization risk across task distributions.

7. Sensitivity in Stochastic, Algorithmic, and Model-Selection Contexts

Generalizing to random measures and fields, MSU can be defined for risk functionals of stochastic systems through random-measure ANOVA decompositions. This permits attributing uncertainty contributions to measurement noise, parameter sample size, or model-selection variance: structural and correlative indices quantify direct and interaction effects, while functional PCA of random fields ranks uncertainty by principal modes (Bastian et al., 2020).
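The functional-PCA step can be sketched directly: center an ensemble of field realizations and take an SVD, so that squared singular values rank the variance carried by each spatial mode. This is a generic sketch of that step, not the cited paper's full random-measure machinery:

```python
import numpy as np

def functional_pca(fields):
    """Rank uncertainty in an (n_samples, n_points) ensemble of field
    realizations by principal modes: squared singular values of the
    centered ensemble give the variance per mode."""
    centered = fields - fields.mean(axis=0)
    _, s, modes = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / (len(fields) - 1)
    return var / var.sum(), modes  # variance fractions, mode shapes

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
# Synthetic ensemble: a dominant random sine mode plus a much weaker
# random cosine mode, so the first principal mode should carry most
# of the variance.
fields = (rng.normal(size=(500, 1)) * np.sin(2 * np.pi * t)
          + 0.2 * rng.normal(size=(500, 1)) * np.cos(2 * np.pi * t))
ratio, _ = functional_pca(fields)
```

Here the leading mode recovers the sine shape and absorbs roughly 96% of the ensemble variance, illustrating how a few principal modes can summarize uncertainty in a high-dimensional field.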

In dynamic microsimulation, MSU quantifies the variance contributions of Monte Carlo noise, parameter uncertainty, and—most crucially—qualitative modeling choices (model type, covariates, scenario, or calibration strategy), highlighting that qualitative structural uncertainties frequently dominate parametric sources (Dumont et al., 18 Nov 2025).

8. Applications and Practical Guidance

MSU frameworks inform feature selection under data uncertainty (Sosa-Cabrera et al., 2017), prioritization of modeling or experimental resources in physics-based models (Portone et al., 10 Sep 2025), risk assessment and decision robustness in policy modeling (Wadekar et al., 23 Apr 2025), and epistemic reliability of LLMs in the context of clinical uncertainty (Sridhar et al., 27 Nov 2025). Across domains, recommended best practices include:

  • Employing variance-based indices (first-order, total-effect) to identify and, when justified by small indices, fix or eliminate unimportant parameters (Nikishova et al., 2019, Dimov et al., 2017).
  • Rigorously accounting for sample size, cardinality, and model assumptions to avoid spurious dependency signals in multivariate settings (Sosa-Cabrera et al., 2017).
  • Leveraging information-theoretic metrics to inform data acquisition by targeting samples that most reduce model epistemic uncertainty (Futami et al., 2023).
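The first recommendation, screening out unimportant parameters, is commonly implemented with total-effect indices: a parameter whose total index is near zero can be frozen at a nominal value with little loss. A generic numpy sketch using the standard Jansen estimator (again assuming i.i.d. Uniform(0,1) inputs):

```python
import numpy as np

def total_effect_indices(f, d, n=100_000, seed=0):
    """Jansen estimator of total-effect Sobol indices
    ST_i = E[(f(A) - f(A with column i from B))^2] / (2 V)
    for a vectorized model f with d independent Uniform(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = f(A)
    V = yA.var()
    ST = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]  # perturb only input i
        ST[i] = ((yA - f(AB)) ** 2).mean() / (2 * V)
    return ST

# Toy model Y = X1 * X2 with an inert third input: its total-effect
# index is ~0, flagging it as safe to fix at a nominal value.
ST = total_effect_indices(lambda X: X[:, 0] * X[:, 1], d=3)
negligible = ST < 0.05
```

Screening on total-effect (rather than first-order) indices is the safe choice here, since it also captures a parameter's contribution through interactions.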

MSU thus provides a unifying framework—mathematically rigorous, scalable, and interpretable—for quantifying and managing the impact of uncertainty in complex models across scientific and engineering disciplines.
