
Global Sensitivity Characterization

Updated 24 November 2025
  • Global sensitivity characterization is a framework that decomposes model output variance into contributions from individual inputs and their interactions using indices like Sobol and Borgonovo’s δ-index.
  • It employs methodologies such as Monte Carlo sampling, surrogate models (e.g., polynomial chaos, Gaussian processes), and derivative-based approaches to efficiently estimate sensitivity indices.
  • Applications span engineering, risk assessment, and scientific modeling, offering actionable insights for model calibration, dimension reduction, and uncertainty management.

Global sensitivity characterization is the systematic quantification and allocation of uncertainty in mathematical or computational models to sources of uncertainty in model inputs, assessed globally over the entire input space. It provides rigorous, quantitative indices that capture main effects, interactions, and higher-order dependencies across random and uncertain parameters. Modern global sensitivity analysis (GSA) underpins robust uncertainty quantification, aids dimension reduction, and serves as a central tool in modeling workflows across engineering, statistics, machine learning, and the physical sciences.

1. Foundational Principles and Indices

The canonical framework for global sensitivity characterization is the variance-based decomposition developed by Sobol, which is anchored in the functional ANOVA expansion. For a square-integrable scalar output $Y = f(\mathbf{X})$, with $\mathbf{X} = (X_1,\dots,X_p)$ a random vector of independent inputs, $f$ can be decomposed uniquely as

$$f(\mathbf{X}) = f_0 + \sum_{i} f_i(X_i) + \sum_{i<j} f_{ij}(X_i,X_j) + \cdots + f_{1\dots p}(X_1,\dots,X_p)$$

where the summands are orthogonal in $L^2$ and have zero mean. The output variance decomposes as

$$\mathrm{Var}[Y] = \sum_{i} \mathrm{Var}[f_i] + \sum_{i<j} \mathrm{Var}[f_{ij}] + \cdots$$

The first-order Sobol index for $X_i$ is $S_i = \mathrm{Var}[f_i]/\mathrm{Var}[Y]$, while the total-effect index for $X_i$,

$$S_{T_i} = 1 - \frac{\mathrm{Var}[\mathbb{E}(Y \mid X_{\sim i})]}{\mathrm{Var}[Y]}$$

captures all main and interaction effects involving $X_i$. For dependent inputs, extended formulations are required, distinguishing "intrinsic" from "bouncing" (correlation-mediated) effects (Bénesse et al., 2021).

Moment-independent, density-based, and information-theoretic indices also play a central role. For instance, the Borgonovo $\delta$-index measures the $L^1$ distance between marginal and conditional output densities, and $f$-sensitivity indices generalize this to any chosen $f$-divergence, naturally encompassing mutual information and total variation as special cases (Rahman, 2015, Francom et al., 13 Jun 2025).
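To make the variance-based indices concrete, the sketch below estimates first-order and total-effect Sobol indices with a plain pick-freeze Monte Carlo scheme on the Ishigami function, a standard GSA benchmark. The sample size and the particular estimator variants (Saltelli for $S_i$, Jansen for $S_{T_i}$) are illustrative choices, not taken from the cited papers:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard GSA test function with known analytic Sobol indices."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

def sobol_pick_freeze(f, p, n, rng):
    """Pick-freeze estimates of first-order and total-effect Sobol indices
    for independent U(-pi, pi) inputs. Cost: n * (p + 2) model evaluations."""
    A = rng.uniform(-np.pi, np.pi, (n, p))
    B = rng.uniform(-np.pi, np.pi, (n, p))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(p), np.empty(p)
    for i in range(p):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                           # resample X_i, freeze the rest
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var       # first-order effect (Saltelli)
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var # total effect (Jansen)
    return S1, ST

rng = np.random.default_rng(0)
S1, ST = sobol_pick_freeze(ishigami, p=3, n=1 << 16, rng=rng)
print(np.round(S1, 3))  # analytic values: [0.314, 0.442, 0.000]
print(np.round(ST, 3))  # analytic values: [0.558, 0.442, 0.244]
```

Note that $S_{T_3} > S_3 = 0$: $X_3$ acts only through its interaction with $X_1$, which the total-effect index captures and the first-order index does not. In production use, quasi-random (Sobol' sequence) sampling would typically replace the plain uniform draws.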

2. Methodologies for Global Sensitivity Characterization

A variety of computational strategies exist for estimating sensitivity indices in practice:

Monte Carlo Approaches:

  • The "pick-freeze" (Saltelli) estimator and related procedures generate paired samples to estimate first- and total-order indices. For deterministic codes, this provides unbiased estimates; for stochastic models (e.g., MC solvers), variance deconvolution must be applied to separate parametric from solver noise contributions (Clements et al., 10 Mar 2024, Francom et al., 13 Jun 2025).
  • Double-loop Monte Carlo is used for quantile-based indices, evaluating the sensitivity of specific output quantiles as functions of random inputs (Kucherenko et al., 2016).
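As an illustration of the double-loop idea, the following sketch estimates how strongly each input drives the 0.95-quantile of the output. The toy model and sample sizes are hypothetical, not drawn from Kucherenko et al.:

```python
import numpy as np

def quantile_sensitivity(f, i, p, rng, q=0.95, n_outer=200, n_inner=2000):
    """Double-loop Monte Carlo sketch of quantile-based sensitivity:
    the outer loop fixes X_i, the inner loop samples the remaining inputs
    and records the conditional q-quantile of Y. The variance of these
    conditional quantiles (left unnormalized here) measures how strongly
    X_i drives the q-quantile of the output."""
    cond_q = np.empty(n_outer)
    for k in range(n_outer):
        X = rng.uniform(-1.0, 1.0, (n_inner, p))
        X[:, i] = rng.uniform(-1.0, 1.0)   # freeze X_i at one outer draw
        cond_q[k] = np.quantile(f(X), q)
    return np.var(cond_q)

# hypothetical toy model: X1 dominates the upper tail, X2 barely matters
g = lambda X: X[:, 0] ** 3 + 0.1 * X[:, 1]
rng = np.random.default_rng(0)
s1 = quantile_sensitivity(g, 0, p=2, rng=rng)
s2 = quantile_sensitivity(g, 1, p=2, rng=rng)
print(s1, s2)  # s1 should dwarf s2
```

The nested loops make the cost `n_outer * n_inner` evaluations per input, which is why surrogate models or single-loop reformulations are preferred for expensive codes.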

Surrogate Model-based Approaches:

  • Polynomial chaos expansions (PCE) and Gaussian process (GP) emulators are frequently used to address the computational cost of large or complex models (Gratiet et al., 2016, Robbe et al., 2023). Sensitivity indices follow analytically from the PCE coefficients due to orthogonality: for normalized basis polynomials,

$$S_i = \frac{\sum_{\alpha:\,\alpha_i>0,\ \alpha_j=0\ \forall j\neq i} c_\alpha^2}{\sum_{\alpha\neq 0} c_\alpha^2}$$

and

$$S_{T_i} = \frac{\sum_{\alpha:\,\alpha_i>0} c_\alpha^2}{\sum_{\alpha\neq 0} c_\alpha^2}$$

  • Statistical guarantees, e.g., confidence intervals on indices, are available in the GP framework by repeated posterior sampling (Gratiet et al., 2016).
  • Compressive sensing and adaptive basis schemes allow tractable surrogates even in moderate to high-dimensional regimes (Robbe et al., 2023, Merritt et al., 2021).
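The coefficient formulas above can be exercised end to end on a small example. The sketch below fits a tensor-product Legendre PCE by least squares to a toy two-input polynomial (an illustrative model, not from the cited papers) and reads off $S_1$ and $S_{T_1}$ from the coefficients. Since this basis is orthogonal but not normalized, each squared coefficient is weighted by the squared basis norm $\prod_k 1/(2\alpha_k+1)$:

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval

def f(x):
    # toy model: main effect of x1, main effect of x2, and an interaction
    return x[:, 0] + x[:, 1] ** 2 + x[:, 0] * x[:, 1]

rng = np.random.default_rng(0)
deg, p, n = 2, 2, 2000
X = rng.uniform(-1.0, 1.0, (n, p))
alphas = list(product(range(deg + 1), repeat=p))   # multi-indices (a1, a2)

def leg1d(x, a):
    """Evaluate the degree-a Legendre polynomial L_a(x)."""
    c = np.zeros(a + 1)
    c[a] = 1.0
    return legval(x, c)

# design matrix of tensor-product Legendre basis functions
Phi = np.column_stack([np.prod([leg1d(X[:, k], a[k]) for k in range(p)], axis=0)
                       for a in alphas])
coef, *_ = np.linalg.lstsq(Phi, f(X), rcond=None)

# variance contribution of each multi-index; L_a has squared norm 1/(2a+1) on U(-1,1)
contrib = {a: coef[j] ** 2 * np.prod([1.0 / (2 * ak + 1) for ak in a])
           for j, a in enumerate(alphas) if any(a)}
total_var = sum(contrib.values())
S1 = sum(v for a, v in contrib.items() if a[0] > 0 and a[1] == 0) / total_var
ST1 = sum(v for a, v in contrib.items() if a[0] > 0) / total_var
print(round(S1, 4), round(ST1, 4))
```

Because the toy model lies exactly in the span of the basis, the recovered indices match the analytic values $S_1 = 0.625$ and $S_{T_1} = 5/6$ to numerical precision; no further model evaluations are needed once the surrogate is fit.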

Derivative and Gradient-based Approaches:

  • Derivative-based global sensitivity measures (DGSM)

$$\nu_i = \mathbb{E}\left[\left(\frac{\partial f}{\partial x_i}\right)^2\right]$$

provide upper bounds on total-effect Sobol indices and are computationally lightweight for smooth models (Constantine et al., 2015, Francom et al., 13 Jun 2025).

  • Active subspace and input-warping analyses extract key eigendirections of the gradient covariance, producing "activity scores" that reveal globally influential input combinations (Constantine et al., 2015, Wycoff et al., 2021).
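Both ideas, DGSM and activity scores, reduce to moments of sampled gradients. A minimal sketch with central finite-difference gradients on a hypothetical smooth test function:

```python
import numpy as np

def f(x):
    # hypothetical smooth model; the gradient always points along (0.7, 0.3)
    return np.exp(0.7 * x[:, 0] + 0.3 * x[:, 1])

def grad_fd(f, X, h=1e-6):
    """Central finite-difference gradient at each sample row of X."""
    n, p = X.shape
    G = np.empty((n, p))
    for i in range(p):
        e = np.zeros(p)
        e[i] = h
        G[:, i] = (f(X + e) - f(X - e)) / (2 * h)
    return G

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (5000, 2))
G = grad_fd(f, X)

dgsm = np.mean(G ** 2, axis=0)   # nu_i = E[(df/dx_i)^2]
C = (G.T @ G) / len(G)           # gradient outer-product (active subspace) matrix
lam, W = np.linalg.eigh(C)       # eigenvalues ascending; columns of W are directions
activity = (W ** 2) @ lam        # activity score of each input
print(dgsm, activity)            # x1 dominates under both measures
```

For this model the gradient covariance is effectively rank one, so the activity scores simply redistribute the single dominant eigenvalue according to the leading eigendirection, mirroring the DGSM ranking.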

Novel Paradigms and Generalizations:

  • A new factorial-experiment-based paradigm recovers not only variance but any user-defined measure of importance (e.g., contrast functions, custom divergences) and produces weighted factorial effects encompassing Sobol, Shapley, and Cramér–von Mises indices as special cases (Mazo, 10 Sep 2024).

Advanced Model Types:

  • For models with stochastic outputs, functional responses, or probability law–valued outputs (e.g., distributional outputs), extended indices based on Wasserstein space Fréchet means or functional ANOVAs are deployed (Fort et al., 2020, Fontana et al., 2020).
  • For Bayesian networks and probabilistic graphical models, variance-based sensitivity indices can be computed by encoding uncertain parameters as additional random variables and performing tensor network–based marginalization, enabling exact global characterization even in discrete structured models (Ballester-Ripoll et al., 9 Jun 2024, Ballester-Ripoll et al., 2021).

3. Extensions, Specializations, and Alternative Indices

Recent research has extended sensitivity characterization to several specialized settings:

  • Rare event probabilities: Efficient double-loop designs couple subset simulation (to estimate rare event probabilities) with sparse PCEs to render the computation of Sobol' indices feasible for problems dominated by tail-event likelihoods. This approach achieves significant computational gains over standard methods while maintaining accuracy (Merritt et al., 2021).
  • Quantile-based sensitivity: Quantile-based indices generalize the variance-based paradigm to measure which inputs most influence a given quantile of output, capturing tail sensitivities critical in risk and reliability applications (Kucherenko et al., 2016).
  • Functional outputs: For models producing time series or curves, finite-change functional sensitivity indices and domain-selective testing with functional ANOVA generalize GSA to infinite-dimensional outputs (Fontana et al., 2020).
  • Density-based/moment-independent indices: Borgonovo's $\delta$-index and $f$-sensitivity indices capture input influence structure beyond mean and variance, sensitive to higher-order effects and distributional tails (Rahman, 2015, Francom et al., 13 Jun 2025).
  • Active subspaces and input warping: Eigenanalysis of the gradient covariance enables summary of global structure and pre-processing (input whitening) to enhance local surrogate performance and facilitate dimension reduction (Constantine et al., 2015, Wycoff et al., 2021).
  • Statistical models: Loss-function-driven GSA, based on MCMC sampling of the posterior or Gibbs-type densities over parameters, handles correlation, stochastic outputs, and multivariate or hierarchical models. Sensitivity is naturally interpreted as the population-averaged size of loss-function gradients (Hart et al., 2017).
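A minimal histogram-based estimator of Borgonovo's $\delta$ illustrates the moment-independent idea from the list above. The slice and bin counts, and the toy model, are illustrative choices:

```python
import numpy as np

def borgonovo_delta(x, y, n_slices=20, n_bins=40):
    """Histogram sketch of Borgonovo's delta for one scalar input:
    delta = 0.5 * E_X[ integral |p_Y - p_{Y|X}| dy ],
    approximated by slicing x into equal-probability bins and comparing
    each slice's output distribution against the marginal one."""
    slice_edges = np.quantile(x, np.linspace(0.0, 1.0, n_slices + 1))
    slice_idx = np.clip(np.searchsorted(slice_edges, x, side="right") - 1,
                        0, n_slices - 1)
    y_edges = np.linspace(y.min(), y.max(), n_bins + 1)
    p_marg = np.histogram(y, bins=y_edges)[0] / len(y)
    delta = 0.0
    for m in range(n_slices):
        ys = y[slice_idx == m]
        p_cond = np.histogram(ys, bins=y_edges)[0] / max(len(ys), 1)
        # total-variation distance, weighted by the slice probability
        delta += (len(ys) / len(y)) * 0.5 * np.abs(p_marg - p_cond).sum()
    return delta

rng = np.random.default_rng(0)
x1 = rng.uniform(-np.pi, np.pi, 100_000)
x2 = rng.uniform(-np.pi, np.pi, 100_000)
y = np.sin(x1) + 7.0 * np.sin(x2) ** 2    # x2 carries most of the variance
d1, d2 = borgonovo_delta(x1, y), borgonovo_delta(x2, y)
print(d1, d2)  # d2 should clearly exceed d1
```

Histogram estimators are biased upward for near-inert inputs; kernel density or quantile-based estimators and larger samples reduce this.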

4. Implementation, Computational Cost, and Best Practices

Practical implementation must balance statistical rigor, computational resources, and the specifics of the underlying model:

  • Monte Carlo sampling is the generic workhorse but may be prohibitive for high-dimensional or expensive models; stratified and low-discrepancy sampling (Latin hypercube designs, Sobol' sequences) improves convergence.
  • Surrogate models: PCEs are efficient and analytic for low to moderate dimensions, with sparse regression and basis adaptation extending practical limits; GPs also offer uncertainty quantification but scale cubically with the number of training points (Gratiet et al., 2016, Robbe et al., 2023).
  • Variance deconvolution is imperative for Monte Carlo-based solvers with stochastic output, correcting for solver-induced noise at negligible extra cost (Clements et al., 10 Mar 2024).
  • Functional and quantile-based indices require nested or structured sampling and functional ANOVA for output spaces of nontrivial dimension (Kucherenko et al., 2016, Fontana et al., 2020).
  • Advanced software and workflow integration: R (sensitivity), Python (SALib, GPflow), and specialized tensor network toolkits are available; emulator-based approaches should routinely be cross-validated and paired with convergence diagnostics (Francom et al., 13 Jun 2025).
  • Preferred pipelines: screening (Morris or local effects) for large $p$ and limited runs, then surrogate modeling and comprehensive GSA, and finally visualization (main-effect/ALE plots, Shapley value estimation) (Francom et al., 13 Jun 2025).
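The screening stage of such a pipeline can be sketched with a bare-bones Morris elementary-effects implementation (numpy only; libraries like SALib provide tested versions, and the toy model here is hypothetical):

```python
import numpy as np

def morris_screen(f, p, r, delta=0.25, rng=None):
    """Bare-bones Morris elementary-effects screening on [0, 1]^p.
    mu_star (mean |EE|) flags influential inputs; sigma (std of EE)
    flags nonlinear or interacting ones."""
    if rng is None:
        rng = np.random.default_rng()
    ee = np.empty((r, p))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, p)   # base point of one trajectory
        y = f(x)
        for i in rng.permutation(p):           # step each input once, in random order
            x_step = x.copy()
            x_step[i] += delta
            y_step = f(x_step)
            ee[t, i] = (y_step - y) / delta    # elementary effect of X_i
            x, y = x_step, y_step
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# hypothetical toy model: X1 linear, X2 nonlinear, X3 inert
g = lambda x: 2.0 * x[0] + 5.0 * x[1] ** 3
mu_star, sigma = morris_screen(g, p=3, r=50, rng=np.random.default_rng(0))
print(mu_star)  # ranking: X2 > X1 > X3 (exactly zero)
print(sigma)    # only X2 shows spread: its effect depends on where it is evaluated
```

Each trajectory reuses the previous evaluation, so the cost is `r * (p + 1)` model runs, which is why Morris screening is the usual first pass before committing a budget to full variance-based GSA.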

5. Applications and Empirical Case Studies

Global sensitivity characterization is central to robust model development, calibration, and validation in fields such as engineering (fusion plasma modeling (Robbe et al., 2023)), reliability (rare event probabilities (Merritt et al., 2021)), environmental modeling, and risk assessment (structural safety quantiles (Kucherenko et al., 2016), climate-economy ensemble analysis (Fontana et al., 2020)).

Key empirical findings include:

  • PCE-based indices closely recover reference values in complex coupled multiphysics models with far fewer model runs than MC (Robbe et al., 2023).
  • Rare event GSA shows that interaction among hyper-parameters is negligible in some settings, justifying model simplification and focused calibration (Merritt et al., 2021).
  • In Bayesian networks, exact tensor network GSA reveals importance patterns and higher-order interactions entirely absent in OAT ranking, with orders-of-magnitude computational gains (Ballester-Ripoll et al., 9 Jun 2024).
  • Functional GSA with domain-selective inferential procedures isolates time-resolved patterns of sensitivity, solving the "where and when is an input important?" problem for policy-relevant outputs (Fontana et al., 2020).

6. Theoretical Developments and Outlook

Global sensitivity characterization continues to evolve:

  • Recent approaches generalize classical ANOVA-based indices to arbitrary divergence measures, arbitrary input dependence, user-defined variability metrics, and weighted factorial decompositions, enabling a unified framework spanning Sobol, Shapley, and moment-independent indices (Mazo, 10 Sep 2024).
  • Extensions to stochastic outputs, functional data, and uncertainty in input distributions are now available, with statistical theory guaranteeing estimator consistency, rates, and robustness under regularity conditions (Fort et al., 2020, Rahman, 2015).
  • Open research frontiers include systematic handling of correlated hyper-parameters, extension to vector outputs and quantile-based quantities of interest, and theoretical links between surrogate error and the accuracy of sensitivity indices (Merritt et al., 2021, Mazo, 10 Sep 2024). Tighter error quantification, scalable algorithms for extremely large $p$, and dynamic/adaptive GSA schemes (e.g., dynamic sensitivity for privacy-aware bandit algorithms (Wang et al., 2022)) are active topics.

Global sensitivity characterization thus provides a comprehensive and extensible foundation for rigorous uncertainty attribution in complex systems modeling, supporting both theoretical analysis and large-scale applied computational pipelines.
