Time-Dependent Inverse UQ
- Time-dependent inverse UQ is a framework that infers dynamic system parameters using Bayesian calibration on time-resolved experimental data.
- It employs dimensionality reduction techniques like functional PCA and phase–amplitude separation to succinctly represent high-dimensional time series.
- Surrogate models such as Gaussian processes and deep neural networks are integrated to map latent representations to parameters while quantifying model uncertainty.
Time-dependent inverse uncertainty quantification (UQ) encompasses the mathematical, algorithmic, and computational methodologies developed to infer posterior distributions for parameters driving dynamic systems, where both the responses and the measurements are functions of time. The central objective is to recover quantitative measures of parametric uncertainty and model inadequacy given time-resolved experimental observations, leveraging statistical calibration, surrogate models, and advanced sampling strategies. This domain spans a wide range of engineered and physical systems, including nuclear thermal hydraulics, structural dynamics, electrochemical transport, and time-correlated neutron/gamma measurements.
1. Mathematical Framework for Time-Dependent Inverse UQ
Time-dependent inverse UQ is typically formulated as a hierarchical Bayesian inference problem. The forward model is a time-dependent dynamical system, described by a parameterized PDE or ODE,
$$\dot{y}(t) = f\big(y(t), t; \theta\big), \qquad y(0) = y_0,$$
where $\theta$ are uncertain parameters and $y(t)$ is the response trajectory. Observational data at a sequence of time points $t_1, \dots, t_n$ is modeled as
$$d_i = y(t_i; \theta) + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, \Sigma_\epsilon).$$
The Bayesian solution seeks the posterior $p(\theta \mid d_{1:n}) \propto p(d_{1:n} \mid \theta)\, p(\theta)$, which is intractable for complex, high-dimensional, and nonlinear systems. Dimensionality reduction of $y(t)$, construction of high-fidelity statistical surrogate models, and efficient MCMC or variational inference algorithms are fundamental components of current workflows (Song et al., 19 Mar 2025, Xie et al., 2023, Wang, 2024).
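As a concrete toy instance of this formulation (the exponential-decay model, noise level, and prior here are illustrative choices, not taken from the cited works), a closed-form forward model with a Gaussian likelihood and Gaussian prior can be inverted by a simple grid search for the posterior mode:

```python
import numpy as np

def forward(theta, t, y0=1.0):
    # Toy forward model: dy/dt = -theta * y, with closed-form solution
    return y0 * np.exp(-theta * t)

def log_posterior(theta, t, d, sigma=0.05, prior_mu=1.0, prior_sd=1.0):
    # Gaussian likelihood over the time series plus a Gaussian prior on theta
    resid = d - forward(theta, t)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)
    log_prior = -0.5 * ((theta - prior_mu) / prior_sd) ** 2
    return log_lik + log_prior

# Synthetic time-resolved data generated at a "true" decay rate of 0.8
t = np.linspace(0.0, 2.0, 20)
rng = np.random.default_rng(0)
d = forward(0.8, t) + 0.05 * rng.standard_normal(t.size)

# Grid search locates the posterior mode; MCMC would sample the full posterior
grid = np.linspace(0.1, 2.0, 200)
theta_map = grid[np.argmax([log_posterior(th, t, d) for th in grid])]
```

With well-resolved data the mode lands near the generating value, while the spread of the log-posterior around it quantifies parametric uncertainty.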
2. Dimension Reduction and Functional Representation
Direct inference on time series poses challenges due to the high temporal dimension and strong temporal correlations. Contemporary methodologies employ functional principal component analysis (fPCA), basis expansion with roughness penalties, phase–amplitude separation, and projection onto low-dimensional latent spaces. For example, the Kriging modeling framework based on functional dimension reduction (KFDR) represents each centered trajectory as
$$y_j(t) \approx \mathbf{c}_j^\top \boldsymbol{\phi}(t),$$
with $\boldsymbol{\phi}(t)$ a vector of smooth basis functions (e.g., B-splines) and the coefficients $\mathbf{c}_j$ determined by minimizing a penalized least-squares criterion incorporating a roughness term. The empirical covariance operator $C(s,t)$ is estimated from the coefficient ensemble, and the leading eigenfunctions $\psi_k$ are extracted by solving the functional eigenequation $\int C(s,t)\,\psi_k(s)\,ds = \lambda_k \psi_k(t)$. Individual responses are thus summarized by principal scores $\xi_{jk} = \int y_j(t)\,\psi_k(t)\,dt$. A small number of modes typically captures >99% of variance (Song et al., 19 Mar 2025, Xie et al., 2023).
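The dimension-reduction step can be sketched with a plain SVD on a synthetic trajectory ensemble (a simplified stand-in for the penalized B-spline route in KFDR; the trajectory model, ensemble size, and grid are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)

# Ensemble of decay transients driven by two latent factors (amplitude, rate)
n = 50
amps = 1.0 + 0.3 * rng.standard_normal(n)
rates = 3.0 + 0.5 * rng.standard_normal(n)
Y = np.array([a * np.exp(-r * t) for a, r in zip(amps, rates)])

Yc = Y - Y.mean(axis=0)                 # center trajectories
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
var_ratio = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(var_ratio, 0.99) + 1)   # modes needed for 99% variance
scores = Yc @ Vt[:k].T                  # principal scores summarizing each trajectory
```

Each 100-point trajectory collapses to a handful of scores, which become the surrogate model's outputs in the next step.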
Phase–amplitude separation, leveraging functional alignment via square-root-slope functions and warping, further enhances reduction efficiency, especially when transients display shifting landmarks. This yields concise amplitude and phase PC scores, significantly reducing the surrogate modeling and inversion burden (Xie et al., 2023).
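A crude illustration of the phase–amplitude idea, using a rigid time shift recovered by cross-correlation in place of the full SRSF warping machinery (signal shapes and parameters are invented):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
pulse = lambda t0: np.exp(-((t - t0) / 0.05) ** 2)

# Two transients differing in phase (peak location) and amplitude (scale)
y1 = pulse(0.40)
y2 = 1.3 * pulse(0.55)

# Phase component: the lag maximizing the cross-correlation
lag = int(np.argmax(np.correlate(y2, y1, mode="full"))) - (len(t) - 1)
y2_aligned = np.roll(y2, -lag)          # remove the phase shift

# Amplitude component: scale ratio after alignment
amp_ratio = y2_aligned.max() / y1.max()
```

After alignment, the residual variation is purely amplitude-like, so fewer PCA modes are needed to represent the aligned ensemble than the raw one.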
3. Surrogate Modeling Strategies
Once response trajectories are projected onto a latent space, surrogates are built to map parameters to latent representations. Common choices include:
- Gaussian process regression (GP/Kriging): For each principal component score $\xi_k$, an independent GP is trained, with posterior mean and covariance available in closed form for predictions at new parameter values. Hyperparameters are optimized by maximizing the GP marginal likelihood (Song et al., 19 Mar 2025, Wang, 2024).
- Deep neural networks (DNN): Feed-forward networks or Bayesian NNs parameterize the mapping, with variational Bayesian inference propagating weight uncertainty to the prediction (Xie et al., 2023, Wu et al., 2024).
- Polynomial chaos expansions (PCE): Orthogonal polynomial basis representations are fit to the reduced latent scores when parametric dependence is smooth (Lartaud et al., 2024).
- Koopman-inspired linear predictors: Lifted nonlinear dynamics are projected into a reduced-order linear representation for efficient sequential prediction and subsequent inversion (Akrout et al., 23 Feb 2026).
Surrogate errors are explicitly propagated into the likelihood for Bayesian inference, either by inflating the data noise covariance or by integrating over surrogate prediction uncertainty (Lartaud et al., 2024, Xie et al., 2023).
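A minimal hand-rolled GP surrogate for a single principal-component score, with fixed hyperparameters rather than marginal-likelihood optimization (the parameter-to-score map is a toy stand-in):

```python
import numpy as np

def rbf(A, B, ls=0.2):
    # Squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

def gp_predict(x_train, y_train, x_new, ls=0.2, noise=1e-6):
    # GP posterior mean and pointwise std with fixed hyperparameters
    K = rbf(x_train, x_train, ls) + noise * np.eye(len(x_train))
    Ks = rbf(x_new, x_train, ls)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(x_new, x_new, ls) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy map from a scalar parameter to the first PC score
x = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * x)
mean, std = gp_predict(x, y, np.array([0.25]))
# std is the surrogate uncertainty that feeds the effective likelihood covariance
```

One such GP per retained score replaces the expensive forward solver inside the MCMC loop, and the predictive std is what gets propagated into the likelihood.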
4. Hierarchical Bayesian and Inference Algorithms
Modern approaches utilize hierarchical Bayesian models to enable information sharing across multiple experimental realizations. Parameters $\theta_j$ for each transient are modeled as draws from a hyper-distribution parameterized by population mean $\mu$ and covariance $\Sigma$. The joint posterior
$$p\big(\{\theta_j\}, \mu, \Sigma \mid \{d_j\}\big) \propto \prod_j p(d_j \mid \theta_j)\, p(\theta_j \mid \mu, \Sigma)\, p(\mu, \Sigma)$$
serves as the target for sampling schemes such as Hamiltonian Monte Carlo with the No-U-Turn Sampler (NUTS). This strategy combats overfitting, enables robust hyperparameter estimation, and ensures credible intervals are neither overly optimistic (under-dispersed) nor excessively conservative (Wang, 2024).
Efficient posterior sampling combines auto-differentiation and surrogate models to provide scalable inference even in high-dimensional latent spaces. Adaptive Metropolis-Hastings and affine-invariant ensemble samplers are also applied for direct sampling from posterior densities (Song et al., 19 Mar 2025, Xie et al., 2023).
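A bare-bones random-walk Metropolis-Hastings sampler illustrates direct posterior sampling (NUTS additionally exploits gradients via auto-differentiation; the standard-normal target here is a placeholder for a surrogate-based posterior):

```python
import numpy as np

def log_post(theta):
    # Placeholder target: standard-normal log-density (up to a constant)
    return -0.5 * theta**2

def metropolis(lp_fn, theta0, n_steps, step=1.0, seed=0):
    # Random-walk Metropolis-Hastings on a scalar parameter
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    theta, lp = theta0, lp_fn(theta0)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = lp_fn(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_post, 0.0, 20000)
burned = chain[5000:]   # discard burn-in before summarizing
```

In a real workflow `lp_fn` would evaluate the hierarchical posterior through the surrogate, and adaptive or ensemble proposals would replace the fixed step size.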
5. Advanced Features: Surrogate Error, Data Fusion, and Active Learning
Explicit quantification of surrogate error is essential for reliable uncertainty intervals. Bayesian treatments sum the GP posterior prediction variance with the experimental noise to form an effective likelihood covariance $\Sigma_{\text{eff}}(\theta) = \Sigma_{\text{exp}} + \Sigma_{\text{GP}}(\theta)$, yielding
$$p(d \mid \theta) = \mathcal{N}\big(d;\, \mu_{\text{GP}}(\theta),\, \Sigma_{\text{eff}}(\theta)\big).$$
This construction is necessary to avoid overconfident posterior distributions in regions where the surrogate is poorly trained (Lartaud et al., 2024).
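The effect of the inflated covariance can be seen numerically, assuming independent per-point variances (all numbers are illustrative):

```python
import numpy as np

def log_likelihood(d, mu_surr, var_surr, var_exp):
    # Effective variance: experimental noise plus surrogate prediction variance
    var_eff = var_exp + var_surr
    return -0.5 * np.sum((d - mu_surr) ** 2 / var_eff + np.log(2 * np.pi * var_eff))

d = np.array([1.0, 2.0])
mu = np.array([1.1, 1.9])

# Near the training data the surrogate variance is tiny; far from it,
# the variance grows and the likelihood flattens, avoiding overconfidence
ll_near = log_likelihood(d, mu, var_surr=np.array([1e-4, 1e-4]), var_exp=0.01)
ll_far = log_likelihood(d, mu, var_surr=np.array([0.5, 0.5]), var_exp=0.01)
```

The flattened likelihood in poorly trained regions is exactly what keeps the posterior from contracting onto artifacts of the surrogate.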
Fusion of multiple time-dependent data streams (e.g., neutron and gamma time correlations) is naturally handled in this framework by constructing block-structured covariance matrices and performing joint inference, which can lead to sharper posteriors than sequential analysis (Lartaud et al., 2024).
Adaptive acquisition—targeted experimental design or active learning—focuses model runs in high-posterior-mass regions, optimizing posterior contraction per simulation cost. Posterior-weighted variance reduction and mutual information-based acquisition criteria have been successfully applied (Lartaud et al., 2024).
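A posterior-weighted variance acquisition rule can be sketched as follows (the posterior weights and surrogate variances are synthetic stand-ins for quantities a real workflow would compute from the current posterior and the GP):

```python
import numpy as np

# Candidate designs, approximate posterior weights, and surrogate variances
candidates = np.linspace(0.0, 1.0, 101)
post_weight = np.exp(-0.5 * ((candidates - 0.6) / 0.1) ** 2)  # posterior mass near 0.6
surr_var = 0.01 + (candidates - 0.3) ** 2                     # surrogate weakest far from 0.3

# Run the next simulation where the posterior is large AND the
# surrogate is still uncertain
acq = post_weight * surr_var
next_design = candidates[np.argmax(acq)]
```

The selected point balances the two criteria, landing between the posterior mode and the region of largest surrogate error, which is what maximizes posterior contraction per simulation.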
6. Benchmarking, Validation, and Performance Metrics
Benchmark problems from nonlinear oscillators, hysteretic dynamics, finite element structural models, and neutron/gamma time-correlation analyses reveal consistent trends:
- KFDR outperforms PCA-, ICA-, and autoencoder-based surrogates on normalized RMSE and credible interval coverage for small sample regimes and high noise (Song et al., 19 Mar 2025).
- Functional PCA with phase–amplitude alignment achieves low-dimensional representations yielding more accurate inverse UQ and aligns posterior propagation with experimental validation, reducing the number of retained modes from ~10 (raw PCA) to ~6 while capturing >99% variance (Xie et al., 2023).
- Hierarchical Bayesian models decrease overfitting risk and tighten validation errors, reducing mean absolute error (MAE) on held-out transients by approximately 20% over single-level approaches (Wang, 2024).
- Time-dependent uncertainty evolution: Methods that quantify uncertainty as the posterior variance from inversion (e.g., VAMP/EP) yield robust measures of time-resolved model reliability and provide mechanisms for defining certified prediction horizons (Akrout et al., 23 Feb 2026).
7. Implementation, Applications, and Limitations
Time-dependent inverse UQ leverages advanced techniques in basis construction, regularization (e.g., roughness penalty/ridge regression, GCV), PCA, and surrogate modeling to handle highly structured time-series data. Critical practical aspects include:
- Surrogate construction is computationally dominant, but cost is drastically reduced by projection onto the low-dimensional latent space (Song et al., 19 Mar 2025, Wang, 2024).
- NNs and BNNs, with auto-differentiation, enable tractable inference for up to hundreds of time points and parameters, suitable for contemporary experimental datasets (Xie et al., 2023).
- Consistency regularizers and curriculum learning are pivotal for robust multi-step surrogate rollouts (Wu et al., 2024).
- Empirical validation of credible intervals is necessary, as model–experiment discrepancy, surrogate inaccuracy, and data noise interact in nontrivial ways.
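As an example of the regularization machinery mentioned above, a generalized cross-validation (GCV) selector for a ridge/roughness penalty might look like this (the polynomial basis and data are illustrative stand-ins for a penalized B-spline fit):

```python
import numpy as np

def gcv_ridge(X, y, lambdas):
    # GCV score for each candidate penalty; return the minimizer
    n = len(y)
    scores = []
    for lam in lambdas:
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
        resid = y - H @ y
        edf = np.trace(H)                       # effective degrees of freedom
        scores.append(n * np.sum(resid**2) / (n - edf) ** 2)
    return lambdas[int(np.argmin(scores))]

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 40)
X = np.vander(t, 8, increasing=True)            # polynomial basis
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(40)
lambdas = np.logspace(-8, 2, 30)
best = gcv_ridge(X, y, lambdas)
```

GCV trades residual fit against effective degrees of freedom, so the chosen penalty smooths noise without discarding the transient's structure.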
Applications extend across thermal hydraulics (TRACE/PSBT), structural analysis, neutron noise diagnostics, electrochemical characterization, and data-driven model identification.
A notable limitation remains: the quality of UQ is fundamentally constrained by surrogate accuracy and coverage. Incomplete coverage or unmodeled discrepancies can result in under-calibrated credible intervals. Active learning and joint data fusion partially mitigate this, but comprehensive validation against experimental observables remains imperative.
Key references:
- KFDR methodology and benchmarking: (Song et al., 19 Mar 2025)
- Hierarchical Bayesian calibration: (Wang, 2024)
- Functional PCA and Bayesian DNNs: (Xie et al., 2023)
- Surrogate-augmented Bayesian UQ in correlation measurement: (Lartaud et al., 2024)
- Latent-space deep surrogate UQ: (Wu et al., 2024)
- Data-driven inverse UQ and Koopman approaches: (Akrout et al., 23 Feb 2026)
- Bayesian PDE parameter inference: (Sethurajan et al., 2018)