Bayesian Uncertainty Quantification Framework
- A Bayesian Uncertainty Quantification (UQ) framework is a probabilistic approach to model calibration and selection that uses Bayes' theorem to characterize and propagate both parameter and model-form uncertainties.
- It employs MCMC sampling and marginal likelihood estimation to rigorously infer posterior distributions from data while addressing Gaussian error assumptions.
- The framework integrates Bayesian model averaging with error-correlation-based fusion to deliver refined predictive intervals and robust model discrimination.
A Bayesian Uncertainty Quantification (UQ) framework is a structured, probabilistic approach for characterizing, propagating, and combining uncertainties in the calibration, selection, and fusion of models. Central to these frameworks is the application of Bayes' theorem to infer distributions over model parameters, enable rigorous model discrimination via marginal likelihoods, and optimally combine predictive information from competing models, taking into account both parameter and model-form uncertainty. These methodologies are applicable to any parameterized forward modeling context under Gaussian error assumptions and have been extensively applied to CALPHAD-based thermodynamic modeling, inverse problems, and scientific computing (Honarmandi et al., 2018).
1. Bayesian Inference Formulation
The Bayesian UQ framework begins with the specification of a forward model $f(x; \theta)$ parameterized by a vector $\theta \in \mathbb{R}^d$ to be calibrated against data $D = \{(x_i, y_i)\}_{i=1}^{N}$. A prior distribution $p(\theta)$ encodes pre-existing knowledge about $\theta$, typically derived from a deterministic best-fit optimization (e.g., Thermo-Calc/PARROT) yielding $\theta^{*}$ with plausible ranges $[\theta^{*} - \Delta, \theta^{*} + \Delta]$. Priors may be:
- Uniform: $p(\theta) = \prod_{j=1}^{d} \mathcal{U}\!\left(\theta_j;\, \theta_j^{*} - \Delta_j,\, \theta_j^{*} + \Delta_j\right)$
- Multivariate normal: $p(\theta) = \mathcal{N}(\theta;\, \theta^{*}, \Sigma_0)$, with $\Sigma_0 = \mathrm{diag}(\sigma_{0,1}^{2}, \ldots, \sigma_{0,d}^{2})$, chosen so that $\theta_j^{*} \pm 2\sigma_{0,j}$ spans the plausible ranges
The likelihood assumes independent Gaussian errors on the outputs $y_i$, modeled as:
$$p(D \mid \theta) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left(-\frac{\left(y_i - f(x_i; \theta)\right)^{2}}{2\sigma^{2}}\right)$$
When $\sigma^{2}$ is unknown, a hyperprior $p(\sigma^{2})$ is used, and $\sigma^{2}$ is included within $\theta$.
By Bayes' rule:
$$p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}, \qquad p(D) = \int p(D \mid \theta)\, p(\theta)\, d\theta$$
This establishes the posterior distribution over model parameters, which encodes all quantifiable uncertainty conditioned on the data (Honarmandi et al., 2018).
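As a concrete illustration, the prior, likelihood, and (unnormalized) log-posterior above can be sketched for a toy linear forward model. The model, synthetic data, and prior bounds below are hypothetical, not taken from the source:

```python
import numpy as np

# Hypothetical setup: forward model f(x; theta) = theta[0] + theta[1]*x,
# calibrated against synthetic data with known noise level sigma.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
true_theta = np.array([1.0, 2.0])
sigma = 0.1
y = true_theta[0] + true_theta[1] * x + rng.normal(0.0, sigma, x.size)

def forward(theta, x):
    return theta[0] + theta[1] * x

def log_prior(theta, lo=-10.0, hi=10.0):
    # Uniform (box) prior: log-density is constant inside, -inf outside
    return 0.0 if np.all((theta >= lo) & (theta <= hi)) else -np.inf

def log_likelihood(theta, x, y, sigma):
    # Independent Gaussian errors on each output y_i
    r = y - forward(theta, x)
    return -0.5 * np.sum(r**2) / sigma**2 - x.size * np.log(sigma * np.sqrt(2.0 * np.pi))

def log_posterior(theta, x, y, sigma):
    # Unnormalized log-posterior: log p(D|theta) + log p(theta)
    lp = log_prior(theta)
    return lp + log_likelihood(theta, x, y, sigma) if np.isfinite(lp) else -np.inf
```

Working in log space avoids numerical underflow when $N$ is large; the normalizing constant $p(D)$ is not needed for MCMC sampling.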
2. MCMC Sampling and Posterior Estimation
Posterior sampling is performed via the Metropolis–Hastings (MH) algorithm:
- Given the current state $\theta^{(t)}$, propose a candidate $\theta' \sim q(\theta' \mid \theta^{(t)})$, usually Gaussian: $q(\theta' \mid \theta^{(t)}) = \mathcal{N}(\theta^{(t)}, \Sigma_q)$
- Compute the acceptance ratio:
$$\alpha = \min\!\left(1,\; \frac{p(D \mid \theta')\, p(\theta')\, q(\theta^{(t)} \mid \theta')}{p(D \mid \theta^{(t)})\, p(\theta^{(t)})\, q(\theta' \mid \theta^{(t)})}\right)$$
- Accept $\theta^{(t+1)} = \theta'$ with probability $\alpha$; otherwise set $\theta^{(t+1)} = \theta^{(t)}$
After burn-in, posterior samples are used to compute expectations, variances, and uncertainty bands for both parameters and forward model predictions (e.g., phase diagram boundaries).
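The steps above can be sketched as a random-walk Metropolis–Hastings sampler. With a symmetric Gaussian proposal the $q$-terms cancel in the acceptance ratio; the target density, step size, and chain length below are illustrative choices, not values from the source:

```python
import numpy as np

def metropolis_hastings(log_target, theta0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal,
    so the proposal densities cancel in the acceptance probability."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    logp = log_target(theta)
    chain = np.empty((n_steps, theta.size))
    for t in range(n_steps):
        proposal = theta + step * rng.normal(size=theta.size)
        logp_prop = log_target(proposal)
        # Accept with probability min(1, ratio), computed in log space
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        chain[t] = theta
    return chain

# Sanity check on a known target: standard normal log-density (up to a constant);
# the posterior mean and variance should be near 0 and 1 after burn-in.
chain = metropolis_hastings(lambda th: -0.5 * np.sum(th**2), [3.0], 20_000)
post = chain[5_000:]  # discard burn-in
```

In practice the proposal covariance $\Sigma_q$ is tuned (often targeting an acceptance rate of roughly 20–50%) and convergence is checked with trace plots or multiple chains.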
3. Bayesian Model Selection via Marginal Likelihoods
For candidate models $M_1, \ldots, M_K$, each with parameters $\theta_k$ and prior $p(\theta_k \mid M_k)$, the model evidence (marginal likelihood) is:
$$p(D \mid M_k) = \int p(D \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k$$
This is approximated by the harmonic-mean estimator over posterior samples $\theta_k^{(s)}$:
$$\hat{p}(D \mid M_k) = \left[\frac{1}{S} \sum_{s=1}^{S} \frac{1}{p(D \mid \theta_k^{(s)}, M_k)}\right]^{-1}$$
Bayes factors for model discrimination are:
$$B_{kl} = \frac{p(D \mid M_k)}{p(D \mid M_l)}$$
Posterior model probabilities:
$$P(M_k \mid D) = \frac{p(D \mid M_k)\, P(M_k)}{\sum_{l=1}^{K} p(D \mid M_l)\, P(M_l)}$$
This formalism allows robust selection or weighting of models based on information present in the data (Honarmandi et al., 2018).
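A minimal sketch of evidence estimation and model weighting, computed in log space for numerical stability. Note that the harmonic-mean estimator, while simple, is known to have high variance in practice; the per-sample log-likelihood values below are made up for illustration:

```python
import numpy as np

def harmonic_mean_log_evidence(log_liks):
    """Harmonic-mean estimate of log p(D|M) from log-likelihoods evaluated
    at posterior draws, using a max-shift (logsumexp trick) for stability."""
    log_liks = np.asarray(log_liks, dtype=float)
    m = (-log_liks).max()
    # log of mean of 1/L_s, then negate to get the log evidence
    log_mean_inv = m + np.log(np.mean(np.exp(-log_liks - m)))
    return -log_mean_inv

def model_probabilities(log_evidences, priors=None):
    """Posterior model probabilities from log evidences and model priors."""
    log_ev = np.asarray(log_evidences, dtype=float)
    priors = np.full(log_ev.size, 1.0 / log_ev.size) if priors is None else np.asarray(priors)
    w = np.exp(log_ev - log_ev.max()) * priors  # shift before exponentiating
    return w / w.sum()

# Hypothetical example: model 1's posterior samples fit the data better
log_ev = [harmonic_mean_log_evidence(ll)
          for ll in ([-10.1, -10.3, -9.9], [-14.2, -13.8, -14.0])]
log_bayes_factor_12 = log_ev[0] - log_ev[1]  # log B_12 > 0 favors model 1
weights = model_probabilities(log_ev)
```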
4. Bayesian Model Averaging and Predictive Mixing
With posterior model weights $w_k = P(M_k \mid D)$, the prediction for any quantity of interest $Q$ is averaged:
$$p(Q \mid D) = \sum_{k=1}^{K} P(M_k \mid D)\, p(Q \mid D, M_k)$$
Posterior predictive intervals thus become a weighted mixture of the model-specific predictions, delivering UQ that reflects both parameter and model-form uncertainty.
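The mixture predictive above has a closed-form mean and variance via the law of total variance; a minimal sketch with hypothetical weights and model-specific predictive moments:

```python
import numpy as np

def bma_mean_var(weights, means, variances):
    """Mean and variance of a weighted mixture of model-specific predictive
    distributions (law of total variance: within- plus between-model spread)."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    mean = np.sum(w * m)
    var = np.sum(w * (v + m**2)) - mean**2
    return mean, var

# Hypothetical two-model example at a single prediction point
mean, var = bma_mean_var([0.7, 0.3], [1.0, 2.0], [0.04, 0.09])
```

The mixture variance always exceeds the weighted average of the within-model variances when the model means disagree, which is exactly how BMA intervals capture model-form uncertainty.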
5. Error-Correlation-Based Model Fusion
Where model errors are not independent, fused predictions are preferable to simple averaging. At a fixed input $x$, each model yields a predictive mean and variance $(\mu_k(x), \sigma_k^{2}(x))$; error correlations are summarized in the covariance matrix $\Sigma$ with off-diagonal elements $\Sigma_{kl} = \rho_{kl}\, \sigma_k \sigma_l$. The fused mean and variance are:
$$\mu_{\mathrm{fused}} = \frac{\mathbf{1}^{\top} \Sigma^{-1} \boldsymbol{\mu}}{\mathbf{1}^{\top} \Sigma^{-1} \mathbf{1}}, \qquad \sigma_{\mathrm{fused}}^{2} = \frac{1}{\mathbf{1}^{\top} \Sigma^{-1} \mathbf{1}}$$
Pairwise correlations $\rho_{kl}$ can be estimated via reification and variance-weighted averaging: reifying model $k$ (treating it as the truth) gives $\rho_{kl}^{(k)} = \sigma_k / \sqrt{(\mu_k - \mu_l)^{2} + \sigma_k^{2}}$, and the two per-model estimates are combined as
$$\bar{\rho}_{kl} = \frac{\sigma_l^{2}}{\sigma_k^{2} + \sigma_l^{2}}\, \rho_{kl}^{(k)} + \frac{\sigma_k^{2}}{\sigma_k^{2} + \sigma_l^{2}}\, \rho_{kl}^{(l)}$$
Error-correlation fusion sharpens prediction by accounting for shared uncertainties (Honarmandi et al., 2018).
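A two-model sketch of reification-based correlation estimation followed by precision-weighted fusion. The predictive means and standard deviations below are hypothetical, and the reified-correlation formula follows the variance-weighted form described above:

```python
import numpy as np

def reified_correlation(mu_i, s_i, mu_j, s_j):
    """Pairwise error correlation via reification: treat each model as the
    truth in turn, then average the two estimates with variance weights."""
    d2 = (mu_i - mu_j) ** 2
    rho_i = s_i / np.sqrt(d2 + s_i**2)  # model i reified
    rho_j = s_j / np.sqrt(d2 + s_j**2)  # model j reified
    return (s_j**2 * rho_i + s_i**2 * rho_j) / (s_i**2 + s_j**2)

def fuse_two(mus, sigmas, rho):
    """Precision-weighted fusion of two correlated model predictions:
    mean = 1' S^-1 mu / 1' S^-1 1, var = 1 / (1' S^-1 1)."""
    mus = np.asarray(mus, dtype=float)
    s0, s1 = sigmas
    c = rho * s0 * s1
    Sigma = np.array([[s0**2, c], [c, s1**2]])
    P = np.linalg.inv(Sigma)
    ones = np.ones(2)
    var = 1.0 / (ones @ P @ ones)
    mean = var * (ones @ P @ mus)
    return mean, var

# Hypothetical predictions from two models at one input point
rho = reified_correlation(1.0, 0.2, 1.4, 0.3)
mean, var = fuse_two([1.0, 1.4], [0.2, 0.3], rho)
```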
6. Unified UQ Workflow
A standard Bayesian UQ workflow—applicable to any parameterized model with Gaussian residuals—proceeds with the following steps:
- Compute deterministic best-fit and parameter ranges via optimization.
- Define parameter priors for each model around best-fit.
- Specify likelihood with Gaussian (optionally hierarchical) errors.
- For each candidate model:
  - a. Sample the posterior via Metropolis–Hastings.
  - b. Propagate parameter uncertainty to the predictive distribution.
  - c. Estimate the marginal likelihood (evidence) via the harmonic mean.
- Calculate Bayes factors and model weights.
- Perform Bayesian model averaging.
- If models are correlated, estimate error correlations and perform fused prediction.
This generic scheme supports rigorous, model-agnostic uncertainty quantification in forward modeling scenarios (Honarmandi et al., 2018).
7. Generalization and Applicability
The full suite of Bayesian UQ, model selection, and fusion formulas is applicable wherever model parameters are uncertain, measurement noise is approximately Gaussian, and multiple plausible model structures (possibly with correlated errors) exist. The framework provides detailed, traceable uncertainty propagation and robust model combination across diverse domains, including phase diagram assessment, scientific computing, and predictive design.
References:
Honarmandi et al. (2018), "Bayesian Uncertainty Quantification and Information Fusion in CALPHAD-based Thermodynamic Modeling."