
Bayesian Uncertainty Quantification Framework

Updated 22 January 2026
  • A Bayesian Uncertainty Quantification (UQ) framework is a probabilistic approach to model calibration and selection that uses Bayes' theorem to characterize and propagate both parameter and model-form uncertainties.
  • It employs MCMC sampling and marginal likelihood estimation to infer posterior distributions from data under Gaussian error assumptions.
  • The framework integrates Bayesian model averaging with error-correlation-based fusion to deliver refined predictive intervals and robust model discrimination.

A Bayesian Uncertainty Quantification (UQ) framework is a structured, probabilistic approach for characterizing, propagating, and combining uncertainties in the calibration, selection, and fusion of models. Central to these frameworks is the application of Bayes' theorem to infer distributions over model parameters, enable rigorous model discrimination via marginal likelihoods, and optimally combine predictive information from competing models, taking into account both parameter and model-form uncertainty. These methodologies are applicable to any parameterized forward modeling context under Gaussian error assumptions and have been extensively applied to CALPHAD-based thermodynamic modeling, inverse problems, and scientific computing (Honarmandi et al., 2018).

1. Bayesian Inference Formulation

The Bayesian UQ framework begins with the specification of a forward model parameterized by a vector $\theta \in \mathbb{R}^d$ to be calibrated against data $D$. A prior distribution $p(\theta)$ encodes pre-existing knowledge about $\theta$, typically derived from deterministic best-fit optimization (e.g., Thermo-Calc/PARROT) as $\theta^{\mathrm{det}}$ with plausible ranges $[\theta^{\mathrm{det}}_i \pm \Delta_i]$. Priors may be:

  • Uniform: $p(\theta) \propto \prod_{i=1}^d \mathbf{1}_{[\theta^{\mathrm{det}}_i - \Delta_i,\; \theta^{\mathrm{det}}_i + \Delta_i]}(\theta_i)$
  • Multivariate normal: $\theta \sim N(\mu_0, \Sigma_0)$, with $\mu_0 = \theta^{\mathrm{det}}$ and $\Sigma_0$ chosen so that $3\sigma$ spans the plausible ranges

The likelihood $p(D \mid \theta)$ assumes independent Gaussian errors on the observed outputs $y_j^{\mathrm{obs}}$, modeled as:

$$p(D \mid \theta) = \prod_{j=1}^N \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(y_j^{\mathrm{obs}} - y_j(\theta))^2}{2\sigma^2} \right)$$

When $\sigma^2$ is unknown, an inverse-gamma hyperprior $\sigma^2 \sim \mathrm{IG}(\alpha, \beta)$ is used, and $\sigma^2$ is included within $\theta$.

By Bayes' rule:

$$p(\theta \mid D) \propto p(D \mid \theta)\, p(\theta)$$

This establishes the posterior distribution over model parameters, which encodes all quantifiable uncertainty conditioned on the data (Honarmandi et al., 2018).
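As a concrete illustration, the prior and likelihood above can be sketched in NumPy for a toy linear forward model (an illustrative stand-in for an actual CALPHAD model; the function names and the known-noise assumption are this sketch's, not the paper's):

```python
import numpy as np

def forward(theta, x):
    """Toy forward model y(theta); a real application would call e.g. a
    CALPHAD evaluation here."""
    return theta[0] + theta[1] * x

def log_prior(theta, theta_det, delta):
    """Uniform prior on [theta_det - delta, theta_det + delta], log scale."""
    inside = np.all(np.abs(theta - theta_det) <= delta)
    return 0.0 if inside else -np.inf

def log_likelihood(theta, x, y_obs, sigma):
    """Independent Gaussian errors with known standard deviation sigma."""
    resid = y_obs - forward(theta, x)
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            - y_obs.size * np.log(np.sqrt(2.0 * np.pi) * sigma))

def log_posterior(theta, x, y_obs, sigma, theta_det, delta):
    """Unnormalized log posterior: log p(theta|D) = log p(D|theta) + log p(theta) + const."""
    lp = log_prior(theta, theta_det, delta)
    if not np.isfinite(lp):
        return -np.inf  # outside the prior support
    return lp + log_likelihood(theta, x, y_obs, sigma)
```

Working in log space avoids underflow when the product over $N$ Gaussian factors becomes very small; an unknown $\sigma^2$ would be handled by appending it to `theta` with an inverse-gamma log-prior term.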

2. MCMC Sampling and Posterior Estimation

Posterior sampling is performed via the Metropolis–Hastings (MH) algorithm:

  1. Given the current state $\theta^{(i)}$, propose a candidate $\theta^{\mathrm{cand}} \sim q(\theta \mid \theta^{(i)})$, usually Gaussian
  2. Compute the acceptance ratio:

$$r = \frac{p(\theta^{\mathrm{cand}})\, p(D \mid \theta^{\mathrm{cand}})}{p(\theta^{(i)})\, p(D \mid \theta^{(i)})} \cdot \frac{q(\theta^{(i)} \mid \theta^{\mathrm{cand}})}{q(\theta^{\mathrm{cand}} \mid \theta^{(i)})}$$

  3. Accept $\theta^{(i+1)} = \theta^{\mathrm{cand}}$ with probability $\alpha = \min(1, r)$; otherwise set $\theta^{(i+1)} = \theta^{(i)}$

After burn-in, the posterior samples $\{\theta^{(i)}\}$ are used to compute expectations, variances, and uncertainty bands for both parameters and forward-model predictions (e.g., phase diagram boundaries).
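A minimal random-walk Metropolis sampler implementing these steps might look as follows (a sketch, not the paper's implementation; with a symmetric Gaussian proposal the $q$-ratio in the acceptance ratio cancels):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_steps, prop_scale, rng=None):
    """Random-walk Metropolis sampler.

    log_post   : callable returning the unnormalized log posterior
    theta0     : starting point (must lie inside the prior support)
    prop_scale : std. dev. of the symmetric Gaussian proposal
    """
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        cand = theta + prop_scale * rng.standard_normal(theta.size)
        lp_cand = log_post(cand)
        # Accept with probability min(1, r); compare in log space for stability.
        if np.log(rng.uniform()) < lp_cand - lp:
            theta, lp = cand, lp_cand
        chain[i] = theta
    return chain
```

Discarding an initial burn-in segment of `chain` and computing sample means, variances, and quantiles then gives the posterior summaries and uncertainty bands described above; `prop_scale` is typically tuned so the acceptance rate lands roughly in the 20–50% range.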

3. Bayesian Model Selection via Marginal Likelihoods

For $K$ candidate models $\{M_k\}$, each with parameters $\theta_k$ and prior $p(\theta_k \mid M_k)$, the model evidence (marginal likelihood) is:

$$p(D \mid M_k) = \int p(D \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k$$

Using posterior samples $\theta_k^{(i)}$ from the MCMC run, this integral is approximated by the harmonic-mean estimator:

$$p(D \mid M_k) \approx \left( \frac{1}{N} \sum_{i=1}^N \frac{1}{p(D \mid \theta_k^{(i)}, M_k)} \right)^{-1}$$

Bayes factors for model discrimination are:

$$B_{AB} = \frac{p(D \mid M_A)}{p(D \mid M_B)}$$

Posterior model probabilities:

$$p(M_k \mid D) = \frac{p(D \mid M_k)\, p(M_k)}{\sum_{j=1}^K p(D \mid M_j)\, p(M_j)}$$

This formalism allows robust selection or weighting of models based on information present in the data (Honarmandi et al., 2018).
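These quantities can be computed from per-sample log-likelihood values in a numerically stable way; the sketch below (function names are this example's, and the harmonic-mean estimator is used as in the text, though it is known to have high variance in practice) works entirely in log space:

```python
import numpy as np

def log_evidence_harmonic_mean(log_liks):
    """Harmonic-mean estimate of log p(D|M) from log-likelihoods evaluated
    at posterior draws: log N - logsumexp(-log_liks)."""
    log_liks = np.asarray(log_liks, dtype=float)
    neg = -log_liks
    m = neg.max()
    # Stable log of the mean of exp(neg), then negated.
    log_mean_inv_lik = m + np.log(np.mean(np.exp(neg - m)))
    return -log_mean_inv_lik

def posterior_model_probs(log_evidences, prior_probs=None):
    """p(M_k|D) from per-model log-evidences and (optionally non-uniform)
    prior model probabilities p(M_k)."""
    log_ev = np.asarray(log_evidences, dtype=float)
    if prior_probs is None:
        prior_probs = np.full(log_ev.size, 1.0 / log_ev.size)
    logw = log_ev + np.log(prior_probs)
    logw -= logw.max()          # shift before exponentiating
    w = np.exp(logw)
    return w / w.sum()
```

A Bayes factor then follows directly as `np.exp(log_ev_A - log_ev_B)`, and the returned weights feed straight into the model averaging of the next section.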

4. Bayesian Model Averaging and Predictive Mixing

With posterior model weights $p(M_k \mid D)$, the prediction for any quantity of interest $\Delta$ is averaged over models:

$$p(\Delta \mid D) = \sum_{k=1}^K p(\Delta \mid D, M_k)\, p(M_k \mid D)$$

Posterior predictive intervals thus become a weighted mixture of the KK model-specific predictions, delivering UQ that reflects both parameter and model-form uncertainty.
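Sampling from this mixture is straightforward: pick a model index according to its posterior weight, then draw from that model's posterior predictive. A sketch (the `model_samplers` callables are hypothetical, e.g. `lambda rng: rng.normal(mu_k, s_k)` for a Gaussian predictive):

```python
import numpy as np

def sample_bma_predictive(model_samplers, weights, n_draws, rng=None):
    """Draw from the BMA mixture p(Delta|D) = sum_k p(Delta|D,M_k) p(M_k|D).

    model_samplers : list of callables, each taking an rng and returning one
                     draw from that model's posterior predictive
    weights        : posterior model probabilities p(M_k|D), summing to 1
    """
    rng = np.random.default_rng(rng)
    weights = np.asarray(weights, dtype=float)
    ks = rng.choice(weights.size, size=n_draws, p=weights)  # pick models
    return np.array([model_samplers[k](rng) for k in ks])
```

Empirical quantiles of the returned draws then give the mixture predictive intervals, which are generally wider than any single model's interval when the models disagree.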

5. Error-Correlation-Based Model Fusion

Where model errors are not independent, fused predictions are preferable to simple averaging. At a fixed input $x$, each model $i$ yields $N(\mu_i, \sigma_i^2)$; error correlations are summarized in a covariance matrix $\Sigma$ with off-diagonal elements $\Sigma_{ij} = \rho_{ij}\sigma_i\sigma_j$. The fused mean and variance are:

$$\mu_f = \frac{e^T \Sigma^{-1} \mu}{e^T \Sigma^{-1} e}, \qquad \sigma_f^2 = \frac{1}{e^T \Sigma^{-1} e}$$

where $e$ is the vector of ones and $\mu = (\mu_1, \dots, \mu_K)^T$.

Pairwise ρij\rho_{ij} can be estimated via reification and weighted averaging:

$$\rho_{ij}^{(i)} = \frac{\sigma_i}{\sqrt{(\mu_i - \mu_j)^2 + \sigma_i^2}}, \qquad \rho_{ij}^{(j)} = \frac{\sigma_j}{\sqrt{(\mu_i - \mu_j)^2 + \sigma_j^2}}, \qquad \overline{\rho}_{ij} = \frac{\sigma_j^2 \rho_{ij}^{(i)} + \sigma_i^2 \rho_{ij}^{(j)}}{\sigma_i^2 + \sigma_j^2}$$

Error-correlation fusion sharpens prediction by accounting for shared uncertainties (Honarmandi et al., 2018).
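The fusion formulas above translate directly into a few lines of linear algebra; a sketch under the stated assumptions (function names are illustrative, and identical model predictions are excluded since they make $\Sigma$ singular):

```python
import numpy as np

def reified_correlation(mu_i, s_i, mu_j, s_j):
    """Averaged reification estimate of the error correlation rho_ij."""
    d2 = (mu_i - mu_j) ** 2
    r_i = s_i / np.sqrt(d2 + s_i ** 2)   # rho_ij^(i): model i treated as truth
    r_j = s_j / np.sqrt(d2 + s_j ** 2)   # rho_ij^(j): model j treated as truth
    return (s_j ** 2 * r_i + s_i ** 2 * r_j) / (s_i ** 2 + s_j ** 2)

def fuse_predictions(mu, sigma):
    """Correlation-aware fusion of K model predictions N(mu_k, sigma_k^2).

    Returns the fused mean mu_f and variance sigma_f^2."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    K = mu.size
    Sigma = np.diag(sigma ** 2)
    for i in range(K):
        for j in range(i + 1, K):
            rho = reified_correlation(mu[i], sigma[i], mu[j], sigma[j])
            Sigma[i, j] = Sigma[j, i] = rho * sigma[i] * sigma[j]
    e = np.ones(K)
    w = np.linalg.solve(Sigma, e)        # Sigma^{-1} e without explicit inverse
    return (w @ mu) / (e @ w), 1.0 / (e @ w)
```

Note that with strong positive correlations the fused mean can fall outside the span of the individual $\mu_i$, and the fused variance is never larger than the smallest $\sigma_i^2$.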

6. Unified UQ Workflow

A standard Bayesian UQ workflow—applicable to any parameterized model with Gaussian residuals—proceeds with the following steps:

  1. Compute deterministic best-fit and parameter ranges via optimization.
  2. Define parameter priors for each model around best-fit.
  3. Specify likelihood with Gaussian (optionally hierarchical) errors.
  4. For each candidate model:
     a. Sample the posterior via Metropolis–Hastings.
     b. Propagate parameter uncertainty to the predictive distribution.
     c. Estimate the marginal likelihood (evidence) via the harmonic mean.
  5. Calculate Bayes factors and model weights.
  6. Perform Bayesian model averaging.
  7. If models are correlated, estimate error correlations and perform fused prediction.

This generic scheme supports rigorous, model-agnostic uncertainty quantification in forward modeling scenarios (Honarmandi et al., 2018).

7. Generalization and Applicability

The full suite of Bayesian UQ, model selection, and fusion formulas is applicable wherever model parameters are uncertain, measurement noise is approximately Gaussian, and multiple plausible model structures exist, including the case of correlated model errors. This framework provides detailed, traceable uncertainty propagation and robust model combination across diverse domains, including phase diagram assessment, scientific computing, and predictive design.

References:

Honarmandi et al. (2018), "Bayesian Uncertainty Quantification and Information Fusion in CALPHAD-based Thermodynamic Modeling."
