
Time-Dependent Inverse UQ

Updated 1 March 2026
  • Time-dependent inverse UQ is a framework that infers dynamic system parameters using Bayesian calibration on time-resolved experimental data.
  • It employs dimensionality reduction techniques like functional PCA and phase–amplitude separation to succinctly represent high-dimensional time series.
  • Surrogate models such as Gaussian processes and deep neural networks are integrated to map latent representations to parameters while quantifying model uncertainty.

Time-dependent inverse uncertainty quantification (UQ) encompasses the mathematical, algorithmic, and computational methodologies developed to infer posterior distributions for parameters driving dynamic systems, where both the responses and the measurements are functions of time. The central objective is to recover quantitative measures of parametric uncertainty and model inadequacy given time-resolved experimental observations, leveraging statistical calibration, surrogate models, and advanced sampling strategies. This domain spans a wide range of engineered and physical systems, including nuclear thermal hydraulics, structural dynamics, electrochemical transport, and time-correlated neutron/gamma measurements.

1. Mathematical Framework for Time-Dependent Inverse UQ

Time-dependent inverse UQ is typically formulated as a hierarchical Bayesian inference problem. The forward model is a time-dependent dynamical system, described by a parameterized PDE or ODE:

$$\frac{\partial}{\partial t} y(t; x) = \mathcal{F}(y(t; x), x), \quad y(0; x) = y_0(x), \quad x \in \mathbb{R}^p, \quad t \in [t_0, t_e],$$

where $x$ are the uncertain parameters and $y(t; x)$ is the response trajectory. Observational data $y_{\text{obs}}(t_j)$ at a sequence of time points $t_j$ are modeled as

$$y_{\text{obs}}(t_j) = y(t_j; x) + \varepsilon_j, \quad \varepsilon_j \sim \mathcal{N}(0, \Sigma_{\text{obs}}).$$

The Bayesian solution seeks the posterior $p(x, \Sigma_{\text{obs}} \mid y_{\text{obs}})$, which is intractable for complex, high-dimensional, and nonlinear systems. Dimensionality reduction of $y(t; x)$, construction of high-fidelity statistical surrogate models, and efficient MCMC or variational inference algorithms are fundamental components of current workflows (Song et al., 19 Mar 2025, Xie et al., 2023, Wang, 2024).
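The formulation above can be made concrete with a minimal sketch. Here the forward model, parameter values, noise level, and prior are all illustrative stand-ins (a hypothetical scalar decay model $\dot{y} = -x\,y$ with analytic trajectory $y(t;x) = e^{-xt}$), not a model from the cited works:

```python
import math
import random

# Minimal sketch of the Bayesian inverse-UQ formulation: a hypothetical
# scalar forward model dy/dt = -x * y, y(0) = 1, with analytic trajectory
# y(t; x) = exp(-x * t). All names and values are illustrative.

def forward(x, times):
    """Forward model response y(t; x) at the observation times."""
    return [math.exp(-x * t) for t in times]

def log_posterior(x, times, y_obs, sigma_obs=0.05, prior_mu=1.0, prior_sd=1.0):
    """Unnormalized log p(x | y_obs): i.i.d. Gaussian noise plus a Gaussian prior."""
    if x <= 0:
        return -math.inf  # decay rate must be positive
    y = forward(x, times)
    log_lik = sum(-0.5 * ((yo - ym) / sigma_obs) ** 2 for yo, ym in zip(y_obs, y))
    log_prior = -0.5 * ((x - prior_mu) / prior_sd) ** 2
    return log_lik + log_prior

# Synthetic time-resolved data generated at a "true" parameter value
random.seed(0)
times = [0.1 * j for j in range(1, 21)]
x_true = 1.3
y_obs = [y + random.gauss(0.0, 0.05) for y in forward(x_true, times)]
```

Everything downstream (dimension reduction, surrogates, samplers) exists to make evaluating and exploring such a posterior tractable when `forward` is an expensive simulation rather than a closed-form expression.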

2. Dimension Reduction and Functional Representation

Direct inference on time series poses challenges due to the high temporal dimension and strong temporal correlations. Contemporary methodologies employ functional principal component analysis (fPCA), basis expansion with roughness penalties, phase–amplitude separation, and projection onto low-dimensional latent spaces. For example, the Kriging modeling framework based on functional dimension reduction (KFDR) represents each centered trajectory as

$$y_i^c(t) \approx n(t)^\top c_i,$$

with $n(t)$ a vector of smooth basis functions (e.g., B-splines) and the coefficients $c_i$ determined by minimizing a penalized least-squares criterion incorporating a roughness term. The empirical covariance operator is estimated from the coefficient ensemble, and the leading eigenfunctions $\phi_k(t)$ are extracted by solving the functional eigenequation. Individual responses are thus summarized by principal scores:

$$\alpha_k(x_i) = \int y_i^c(t)\, \phi_k(t)\, dt = b_k^\top c_i.$$

A small number of modes $m \ll N_t$ typically captures >99% of the variance (Song et al., 19 Mar 2025, Xie et al., 2023).
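A discrete-grid sketch of this reduction step: trajectories are centered, an SVD of the snapshot matrix supplies discretized eigenfunctions, and each run is summarized by $m$ principal scores. The ensemble here is a hypothetical exponential-decay family; a basis-coefficient version with roughness penalties, as in KFDR, would operate on the $c_i$ instead of the raw grid values:

```python
import numpy as np

# Discrete approximation to functional PCA: SVD of the centered snapshot
# matrix in place of the basis-coefficient eigenproblem. Ensemble is a
# hypothetical exponential-decay family, purely for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                 # N_t = 200 time points
X = rng.uniform(0.5, 2.0, size=50)             # 50 sampled parameter values
Y = np.exp(-np.outer(X, t))                    # response ensemble y_i(t)

Y_mean = Y.mean(axis=0)
Yc = Y - Y_mean                                # centered trajectories y_i^c(t)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)

var_frac = np.cumsum(S**2) / np.sum(S**2)      # cumulative variance captured
m = int(np.searchsorted(var_frac, 0.99)) + 1   # modes needed for >99% variance
phi = Vt[:m]                                   # discretized eigenfunctions phi_k(t)
scores = Yc @ phi.T                            # principal scores alpha_k(x_i)

Y_rec = Y_mean + scores @ phi                  # rank-m reconstruction
```

For smooth transient families of this kind, $m$ is typically a small single-digit number, which is what makes the subsequent surrogate construction cheap.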

Phase–amplitude separation, leveraging functional alignment via square-root-slope functions and warping, further enhances reduction efficiency, especially when transients display shifting landmarks. This yields concise amplitude and phase PC scores, significantly reducing the surrogate modeling and inversion burden (Xie et al., 2023).
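A highly simplified stand-in for this idea: full SRSF-based elastic warping is replaced here by a rigid time shift found by cross-correlation, which only illustrates how timing (phase) and shape (amplitude) information can be separated before PCA. The bump family, shift magnitudes, and alignment method are all assumptions for illustration, not the alignment used in the cited work:

```python
import numpy as np

# Toy phase-amplitude separation: rigid-shift alignment by cross-correlation
# as a crude stand-in for SRSF-based warping. Gaussian-bump transients with
# random timing jitter are illustrative only.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
shifts = rng.uniform(-0.1, 0.1, size=20)       # per-run phase variability
runs = np.stack([np.exp(-0.5 * ((t - 0.5 - s) / 0.05) ** 2) for s in shifts])

template = runs.mean(axis=0)
dt = t[1] - t[0]
aligned, phase = [], []
for y in runs:
    corr = np.correlate(y, template, mode="full")
    lag = int(np.argmax(corr)) - (len(t) - 1)  # best rigid shift in samples
    aligned.append(np.roll(y, -lag))           # amplitude component (aligned)
    phase.append(lag * dt)                     # recovered phase component
aligned = np.stack(aligned)
```

After alignment, the pointwise spread across runs collapses, so far fewer amplitude modes are needed to capture the remaining variance; the recovered lags become the separate (low-dimensional) phase scores.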

3. Surrogate Modeling Strategies

Once response trajectories are projected onto a latent space, surrogates are built to map parameters to latent representations. Common choices include:

  • Gaussian process (Kriging) regression, as in the KFDR framework, which provides analytic predictive variances (Song et al., 19 Mar 2025);
  • neural networks and Bayesian neural networks, which scale to hundreds of time points and parameters (Xie et al., 2023).

Surrogate errors are explicitly propagated into the likelihood for Bayesian inference, either by inflating the data noise covariance or by integrating over surrogate prediction uncertainty (Lartaud et al., 2024, Xie et al., 2023).
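A minimal Gaussian-process surrogate sketch, assuming an RBF kernel with untuned, illustrative hyperparameters: one GP maps a scalar input parameter to one principal score, and its posterior variance is the surrogate-error term that gets folded into the likelihood. The sine "score response" is a stand-in for a real parameter-to-score map:

```python
import numpy as np

# Minimal GP regression from scratch: one GP per principal score, with
# posterior variance available as the surrogate-error term. Kernel and
# hyperparameters are illustrative, not tuned.
def rbf(a, b, ls=0.3, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_fit_predict(x_train, y_train, x_test, noise=1e-6):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha                             # m_n(theta)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)     # C_n(theta)
    return mean, np.diag(cov)

x_train = np.linspace(0.5, 2.0, 8)       # design points in parameter space
y_train = np.sin(3.0 * x_train)          # hypothetical PC-score response
x_test = np.array([1.0, 5.0])            # inside vs. far outside the design
mean, var = gp_fit_predict(x_train, y_train, x_test)
```

The predictive variance is small inside the training design and reverts to the prior variance far outside it, which is exactly the behavior exploited when surrogate error is propagated into the likelihood.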

4. Hierarchical Bayesian and Inference Algorithms

Modern approaches utilize hierarchical Bayesian models to enable information sharing across multiple experimental realizations. Parameters $\theta_i$ for each transient are modeled as draws from a hyper-distribution parameterized by a population mean and covariance $(\mu, \Sigma_\theta)$. The joint posterior

$$p(\mu, \Sigma_\theta, \{\theta_i\} \mid \{y_i\}) \propto p(\mu)\, p(\Sigma_\theta) \prod_i \mathcal{N}_d(\theta_i; \mu, \Sigma_\theta)\, \mathcal{N}_p(y_i; \eta(x_i, \theta_i), \Sigma_{\text{obs},i})$$

serves as the target for sampling schemes such as Hamiltonian Monte Carlo with the No-U-Turn Sampler (NUTS). This strategy combats overfitting, enables robust hyperparameter estimation, and ensures credible intervals are neither overly optimistic nor under-dispersed (Wang, 2024).

Efficient posterior sampling combines auto-differentiation and surrogate models to provide scalable inference even in high-dimensional latent spaces. Adaptive Metropolis-Hastings and affine-invariant ensemble samplers are also applied for direct sampling from posterior densities (Song et al., 19 Mar 2025, Xie et al., 2023).
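A sketch of an adaptive Metropolis-Hastings sampler of the kind used for direct posterior sampling; the Gaussian target, adaptation rule, and tuning constants are all illustrative assumptions standing in for a surrogate-based log-posterior:

```python
import numpy as np

# Adaptive Metropolis-Hastings sketch: the proposal scale is tuned during
# burn-in toward a target acceptance rate. The Gaussian target is an
# illustrative stand-in for a surrogate-based log-posterior.
def log_post(theta):
    return -0.5 * ((theta - 1.3) / 0.2) ** 2   # hypothetical posterior

def adaptive_mh(n_steps=20000, burn=5000, seed=0):
    rng = np.random.default_rng(seed)
    theta, lp = 0.0, log_post(0.0)
    scale, samples, accepted = 1.0, [], 0
    for i in range(n_steps):
        prop = theta + scale * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
            accepted += 1
        if i < burn and (i + 1) % 100 == 0:    # adapt only during burn-in
            rate = accepted / (i + 1)
            scale *= np.exp(rate - 0.3)        # nudge toward ~30% acceptance
        if i >= burn:
            samples.append(theta)
    return np.array(samples)

samples = adaptive_mh()
```

Freezing the adaptation after burn-in keeps the post-burn-in chain a valid (non-adaptive) Markov chain; gradient-based schemes such as NUTS replace the random-walk proposal with trajectories informed by auto-differentiated log-posterior gradients.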

5. Advanced Features: Surrogate Error, Data Fusion, and Active Learning

Explicit quantification of surrogate error is essential for reliable uncertainty intervals. Bayesian treatments sum the GP posterior prediction variance with the experimental noise to form an effective likelihood covariance:

$$\Sigma_{\text{eff}}(\theta) = \Sigma_{\text{noise}} + C_n(\theta),$$

yielding

$$p(y \mid \theta) \approx \mathcal{N}(m_n(\theta), \Sigma_{\text{eff}}(\theta)).$$

This construction is necessary to avoid overconfident posterior distributions in regions where the surrogate is poorly trained (Lartaud et al., 2024).
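The inflated-likelihood construction can be sketched in a few lines; the observation, surrogate mean, and covariance magnitudes below are illustrative values, not taken from the cited work:

```python
import numpy as np

# Sketch of the Sigma_eff construction: the surrogate's predictive
# covariance C_n(theta) is added to the noise covariance before evaluating
# the Gaussian log-likelihood. All values are illustrative.
def gauss_loglik(y, mean, cov):
    diff = y - mean
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff)
                   + len(y) * np.log(2 * np.pi))

y = np.array([1.0, 0.8])                 # observed principal scores
m_n = np.array([0.9, 0.9])               # surrogate mean prediction
sigma_noise = 0.05**2 * np.eye(2)        # experimental noise covariance
c_n = 0.2**2 * np.eye(2)                 # surrogate predictive covariance

ll_naive = gauss_loglik(y, m_n, sigma_noise)        # ignores surrogate error
ll_eff = gauss_loglik(y, m_n, sigma_noise + c_n)    # Sigma_eff construction
```

With residuals larger than the pure noise level, the naive likelihood penalizes the discrepancy far more heavily than the inflated one; used inside MCMC, the inflated version flattens the posterior wherever the surrogate is uncertain, preventing the overconfidence described above.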

Fusion of multiple time-dependent data streams (e.g., neutron and gamma time correlations) is naturally handled in this framework by constructing block-structured covariance matrices and performing joint inference, which can lead to sharper posteriors than sequential analysis (Lartaud et al., 2024).

Adaptive acquisition—targeted experimental design or active learning—focuses model runs in high-posterior-mass regions, optimizing posterior contraction per simulation cost. Posterior-weighted variance reduction and mutual information-based acquisition criteria have been successfully applied (Lartaud et al., 2024).
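A posterior-weighted variance-reduction criterion can be sketched as follows; the posterior proxy, the distance-based variance proxy, and the design points are hypothetical stand-ins for a real posterior density and GP predictive variance:

```python
import numpy as np

# Sketch of posterior-weighted acquisition: the next simulation is run where
# (unnormalized) posterior density times surrogate predictive variance is
# largest, i.e. in high-posterior-mass regions the surrogate covers poorly.
def acquisition(candidates, post_density, surrogate_var):
    score = post_density(candidates) * surrogate_var(candidates)
    return candidates[np.argmax(score)]

candidates = np.linspace(0.0, 3.0, 301)
post = lambda th: np.exp(-0.5 * ((th - 1.3) / 0.2) ** 2)   # posterior proxy
design = np.array([0.0, 0.5, 2.5, 3.0])                    # existing runs
# variance proxy: grows with squared distance from the nearest design point
svar = lambda th: np.min((th[:, None] - design[None, :]) ** 2, axis=1)

theta_next = acquisition(candidates, post, svar)
```

The selected point lands near the posterior mode rather than at the empty ends of the parameter range, because candidates far from the posterior mass are down-weighted regardless of how uncertain the surrogate is there.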

6. Benchmarking, Validation, and Performance Metrics

Benchmark problems from nonlinear oscillators, hysteretic dynamics, finite element structural models, and neutron/gamma time-correlation analyses reveal consistent trends:

  • KFDR outperforms PCA-, ICA-, and autoencoder-based surrogates on normalized RMSE and credible interval coverage for small sample regimes and high noise (Song et al., 19 Mar 2025).
  • Functional PCA with phase–amplitude alignment achieves low-dimensional representations yielding more accurate inverse UQ and aligns posterior propagation with experimental validation, reducing the number of retained modes from ~10 (raw PCA) to ~6 while capturing >99% variance (Xie et al., 2023).
  • Hierarchical Bayesian models decrease overfitting risk and tighten validation errors, reducing mean absolute error (MAE) on held-out transients by approximately 20% over single-level approaches (Wang, 2024).
  • Time-dependent uncertainty evolution: Methods that quantify uncertainty as the posterior variance from inversion (e.g., VAMP/EP) yield robust measures of time-resolved model reliability and provide mechanisms for defining certified prediction horizons (Akrout et al., 23 Feb 2026).

7. Implementation, Applications, and Limitations

Time-dependent inverse UQ leverages advanced techniques in basis construction, regularization (e.g., roughness penalty/ridge regression, GCV), PCA, and surrogate modeling to handle highly structured time-series data. Critical practical aspects include:

  • Surrogate construction is computationally dominant, but its cost is drastically reduced by projection onto $m \ll N_t$ latent modes (Song et al., 19 Mar 2025, Wang, 2024).
  • NNs and BNNs, with auto-differentiation, enable tractable inference for up to hundreds of time points and parameters, suitable for contemporary experimental datasets (Xie et al., 2023).
  • Consistency regularizers and curriculum learning are pivotal for robust multi-step surrogate rollouts (Wu et al., 2024).
  • Empirical validation of credible intervals is necessary, as model–experiment discrepancy, surrogate inaccuracy, and data noise interact in nontrivial ways.

Applications extend across thermal hydraulics (TRACE/PSBT), structural analysis, neutron noise diagnostics, electrochemical characterization, and data-driven model identification.

A notable limitation remains: the quality of UQ is fundamentally constrained by surrogate accuracy and coverage. Incomplete coverage or unmodeled discrepancies can result in under-calibrated credible intervals. Active learning and joint data fusion partially mitigate this, but comprehensive validation against experimental observables remains imperative.

