Failure Sensitivity Analysis
- Failure Sensitivity Analysis is a rigorous framework that quantifies how uncertain input variables affect system failure probability, enabling targeted risk reduction.
- It employs moment-independent sensitivity indices and distributional perturbations to rank input variables in both linear and non-linear systems with computational efficiency.
- Monte Carlo sampling and surrogate models enhance estimation of failure probabilities, offering actionable insights for design optimization in safety-critical environments.
Failure Sensitivity Analysis is a rigorous methodological framework for quantifying and comparing the influence of random input variables on the probability of system failure. Modern approaches focus on moment-independent measures and leverage distributional perturbations, offering ranking and interpretability across both linear and non-linear, black-box systems under uncertainty. The domain encompasses theoretical definitions, practical Monte Carlo and importance sampling algorithms, asymptotic properties, graphical representations for ranking, and application-oriented recommendations for uncertainty quantification, as exemplified by Sergienko et al. (2013).
1. Mathematical and Statistical Foundations
Let $X = (X_1, \dots, X_d)$ be a vector of independent random inputs with joint density $f(x) = \prod_{i=1}^{d} f_i(x_i)$. Define a deterministic performance function $G : \mathbb{R}^d \to \mathbb{R}$, with the failure domain $F = \{x : G(x) \le 0\}$ and failure probability
$$P_f = \mathbb{P}\big(G(X) \le 0\big) = \int_F f(x)\,dx.$$
The principal aim of failure sensitivity analysis is to identify and rank input variables according to their contribution to the variability of $P_f$, focusing especially on cases where $G$ can only be evaluated through computationally expensive simulations.
A moment-independent sensitivity index is derived by perturbing the marginal density of each $X_i$ while keeping the other marginals fixed. Denote the perturbed marginal as $f_{i\delta}$, and the new joint as $f_{i\delta}(x_i)\prod_{j \neq i} f_j(x_j)$. The corresponding perturbed failure probability is $P_{f,i\delta} = \int_F f_{i\delta}(x_i)\prod_{j \neq i} f_j(x_j)\,dx$, leading to the (relative) sensitivity index
$$S_{i\delta} = \frac{P_{f,i\delta} - P_f}{P_f},$$
or, in Kullback-Leibler (KL) divergence parameterization, the index viewed as a function of the divergence $\mathrm{KL}(f_{i\delta}\,\|\,f_i)$ between perturbed and reference marginals.
This construction is intrinsically moment-independent, aligning with distribution-perturbation paradigms for reliability analysis.
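The perturbed-probability construction can be illustrated with a minimal Monte Carlo sketch. The linear limit state, the independent standard normal inputs, and the Gaussian mean-shift perturbation below are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical limit state: failure when G(x) <= 0, i.e. when X1 + X2 >= 3.
def G(x):
    return 3.0 - x[:, 0] - x[:, 1]

N = 200_000
x = rng.standard_normal((N, 2))   # independent standard normal inputs
fail = G(x) <= 0.0

P_f = fail.mean()                 # reference failure probability P_f

# Perturb the marginal of X1 by a mean shift delta while keeping X2 fixed.
# For standard normal marginals, the ratio of the shifted density to the
# reference density is w(x1) = exp(delta * x1 - delta**2 / 2), so the
# perturbed probability is estimated from the SAME sample by reweighting.
delta = 0.5
w = np.exp(delta * x[:, 0] - 0.5 * delta**2)
P_f_delta = np.mean(fail * w)     # perturbed failure probability

S_1 = (P_f_delta - P_f) / P_f     # relative sensitivity index for X1
```

Here a positive mean shift of X1 pushes mass toward the failure region, so the index comes out positive; other marginal families only change the density-ratio formula for `w`.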
2. Local Sensitivity: Perturbative and Covariance-Based Indices
For infinitesimal perturbations, densities are parameterized via exponential tilting: $f_{i\delta}(x_i) = f_i(x_i)\exp\big(\delta\,t(x_i) - \psi_i(\delta)\big)$, with $t$ a generating function (e.g., $t(x_i) = x_i$ for a mean shift) and $\psi_i(\delta) = \log \mathbb{E}\big[e^{\delta\,t(X_i)}\big]$ the normalizing cumulant. The first-order sensitivity, under Taylor expansion in $\delta$, is
$$\left.\frac{\partial P_{f,i\delta}}{\partial \delta}\right|_{\delta = 0} = \mathrm{Cov}\big(\mathbf{1}_F(X),\, t(X_i)\big).$$
This quantifies the ability of a perturbation in $X_i$ to shift the failure probability, moment-independently. For small $\delta$, the sensitivity scales as
$$P_{f,i\delta} - P_f \approx \delta\,\mathrm{Cov}\big(\mathbf{1}_F(X),\, t(X_i)\big),$$
emphasizing the covariance between the failure indicator and the perturbation kernel.
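The covariance form of the first-order sensitivity is directly estimable from a plain sample. A minimal numerical check, reusing the same hypothetical limit state and Gaussian inputs assumed above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: independent standard normals, failure if X1 + X2 >= 3.
N = 500_000
x = rng.standard_normal((N, 2))
fail = (x[:, 0] + x[:, 1] >= 3.0).astype(float)

# Local sensitivity at delta = 0 for the mean-shift generator t(x_i) = x_i:
# dP_f/d(delta) = Cov(1_F(X), X_i), estimated by the sample covariance.
dPf_dX1 = np.cov(fail, x[:, 0])[0, 1]
dPf_dX2 = np.cov(fail, x[:, 1])[0, 1]
# This limit state is symmetric in X1 and X2, so the two local
# sensitivities should nearly coincide.
```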
3. Monte Carlo Algorithms and Variance-Reduction
A practical implementation relies on fast Monte Carlo or surrogate-accelerated sampling. For i.i.d. samples $x^{(1)}, \dots, x^{(N)} \sim f$, one estimates:
- Reference failure probability: $\hat{P}_f = \frac{1}{N}\sum_{n=1}^{N} \mathbf{1}_F\big(x^{(n)}\big)$.
- Perturbed probability for variable $X_i$: $\hat{P}_{f,i\delta} = \frac{1}{N}\sum_{n=1}^{N} \mathbf{1}_F\big(x^{(n)}\big)\,w_i\big(x^{(n)}\big)$, where $w_i(x) = f_{i\delta}(x_i)/f_i(x_i)$ is the likelihood ratio of the perturbed to the reference marginal.
- Sensitivity index estimate: $\hat{S}_{i\delta} = \big(\hat{P}_{f,i\delta} - \hat{P}_f\big)/\hat{P}_f$.
The entire procedure can be accomplished with a single base sample, allowing all variable indices to be computed without extra evaluations. Importance sampling, control variates, and surrogate models (e.g., Gaussian process, kriging) are recommended for variance reduction in cases of rare-event probabilities or expensive simulators.
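The single-sample scheme can be sketched compactly. The limit state, the standard normal inputs, and the Gaussian mean-shift perturbation are illustrative assumptions; other choices only change the likelihood-ratio formula:

```python
import numpy as np

rng = np.random.default_rng(2)

def sensitivity_indices(G, sample, delta):
    """Estimate a sensitivity index for every input from ONE base sample.

    Assumes independent standard normal inputs and a Gaussian mean-shift
    perturbation of size delta (illustrative; other marginals or
    perturbations only change the likelihood ratio w below).
    """
    fail = G(sample) <= 0.0
    P_f = fail.mean()
    S = np.empty(sample.shape[1])
    for i in range(sample.shape[1]):
        # likelihood ratio for shifting the mean of X_i by delta
        w = np.exp(delta * sample[:, i] - 0.5 * delta**2)
        P_f_delta = np.mean(fail * w)   # no extra evaluations of G needed
        S[i] = (P_f_delta - P_f) / P_f
    return P_f, S

# Hypothetical limit state in which X1 carries most of the failure risk.
G = lambda x: 3.0 - 2.0 * x[:, 0] - 0.5 * x[:, 1]
sample = rng.standard_normal((200_000, 2))
P_f, S = sensitivity_indices(G, sample, delta=0.25)
# Ranking by |S| identifies X1 as the more influential input.
```

Because the failure indicator is computed once and only the weights change per input, the cost of all indices is dominated by the single batch of simulator runs.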
4. Interpretation, Ranking, and Plausibility
Sensitivity indices are interpreted as follows:
- Positive $S_{i\delta}$: the perturbation increases the probability of failure.
- Negative $S_{i\delta}$: the perturbation decreases the probability of failure.
- The magnitude $|S_{i\delta}|$ reflects the relative influence of $X_i$ on $P_f$ for the specific perturbation form.
Variables are ranked by $|S_{i\delta}|$. Practical thresholds for "significant influence" typically correspond to $|S_{i\delta}|$ exceeding 0.05–0.10, adjustable per application domain. For design-of-experiments, perturbation magnitudes are selected in relation to expert uncertainty in input characterization, commonly calibrated through a prescribed KL-divergence level.
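The ranking-and-threshold step is mechanical once indices are computed. A small sketch (input names and index values are made up for illustration, not results from the paper):

```python
import numpy as np

# Illustrative sensitivity indices for three hypothetical inputs.
names = ["X1", "X2", "X3"]
S = np.array([0.55, 0.12, -0.03])

order = np.argsort(-np.abs(S))                  # rank by magnitude
ranked = [(names[i], float(S[i])) for i in order]

threshold = 0.05                                # lower end of the 0.05-0.10 band
significant = [names[i] for i in order if abs(S[i]) >= threshold]
# ranked -> [('X1', 0.55), ('X2', 0.12), ('X3', -0.03)]
# significant -> ['X1', 'X2']
```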
5. Case Study: CO₂ Storage Reliability
In Sergienko et al. (2013), the method is applied to CO₂ geostorage risk:
- Inputs: PORO (porosity, truncated to a bounded interval), KSAND and KRSAND (sand permeability and relative water-permeability endpoint).
- Failure: defined by the limit state $G(X) \le 0$ on the reservoir response.
- Reference failure probability $\hat{P}_f$ estimated via a kriging surrogate.
- Sensitivity analysis:
- Positive mean shift: PORO markedly increases $P_f$; KSAND/KRSAND decrease $P_f$.
- Negative mean shift: KSAND/KRSAND drive adverse (increased) failure probability as system permeability declines.
- Boundary-shift perturbations provide insight into safe operation intervals (e.g., an upward shift of PORO's truncation bound reveals how $P_f$ responds across the admissible range).
Recommendations are to characterize with high precision those inputs $X_i$ with large $|S_{i\delta}|$, while variables with negligible $|S_{i\delta}|$ require less stringent uncertainty quantification. Mean-shift and boundary-shift analyses illuminate different facets of the failure probability's sensitivity to uncertain input distributions.
6. Application Guidance and Generality
This density-perturbation method is broadly applicable to systems requiring reliability sensitivity analysis under arbitrary input uncertainties, with minimal computational cost beyond standard Monte Carlo. It is suitable for systems with complex, computationally expensive performance functions, and is general with respect to both moment-independent and distributional notions of sensitivity. Ranking inputs by their sensitivity indices guides both variable prioritization and resource allocation for characterization in safety-critical engineering and environmental risk contexts.
The framework allows for:
- Simultaneous exploration of multiple uncertainty sources.
- Intuitive, interpretable graphical representation (curves of $S_{i\delta}$ as a function of the perturbation size $\delta$) for decision support.
- Efficient, single-run computation for all input sensitivity indices.
In summary, probability-distribution perturbation sensitivity analysis delivers systematic, moment-independent quantification and ranking of input uncertainties on failure probability, supporting risk-informed design and targeted uncertainty reduction in high-stakes applications (Sergienko et al., 2013).