
Bi-Fidelity Interpolative Decomposition

Updated 17 September 2025
  • Bi-Fidelity Interpolative Decomposition is a non-intrusive, sample-based reduction technique that approximates high-fidelity outputs using inexpensive low-fidelity simulations with carefully selected high-fidelity samples.
  • The method employs a greedy, pivoted QR strategy to extract a low-rank representation, enabling quick construction of surrogates for uncertainty quantification and design space exploration.
  • Rigorous error analysis and convergence studies ensure that the surrogate maintains near high-fidelity accuracy while significantly reducing computational cost.

Bi-fidelity interpolative decomposition (ID) is a non-intrusive, sample-based model reduction technique for accurately approximating the outputs of high-fidelity models using many inexpensive low-fidelity (LF) simulations and a small set of high-fidelity (HF) runs. This decomposition exploits the empirical low-rank structure often observed in parametric or stochastic numerical simulations, enabling construction of surrogates for uncertainty quantification, design space exploration, and related many-query tasks. The resulting bi-fidelity model is characterized by an accuracy close to that of the HF model and a computational cost comparable to the LF model, assuming the rank structure is sufficiently sharp (Hampton et al., 2017).

1. Bi-Fidelity Interpolative Decomposition: Formulation and Construction

The central assumption of the bi-fidelity ID approach is that the mapping from parameters $\mu$ (or uncertain inputs) to the quantity of interest (QoI), denoted $v(\mu)$, is well approximated by a low-rank linear combination of simulation outputs at a small set of parameter configurations. Specifically, the representation reads

$$v(\mu) \approx \sum_{\ell=1}^{r} v(\mu^{(\ell)})\, c_{\ell}(\mu)$$

where the basis vectors $v(\mu^{(\ell)})$ are simulations at carefully chosen parameters. The LF data is aggregated into a matrix $L$, and a rank-$r$ interpolative decomposition is performed:

$$L \approx L(r)\, C_L$$

Here, $L(r)$ contains the $r$ selected skeleton columns, and $C_L$ is the coefficient (interpolation) matrix. The selection of the basis indices is performed by a greedy, pivoted QR strategy, maximizing the representativeness of $L(r)$.

To "lift" this construction to the HF space, $L(r)$ is replaced by $H(r)$, the high-fidelity solutions at the same parameter values, to yield the bi-fidelity approximation:

$$\hat{H} \equiv H(r)\, C_L$$

This means that only $r$ expensive HF runs are needed (at the selected parameters), while the interpolation rule derived from the cheap LF model applies throughout the domain.
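As a concrete illustration (a minimal sketch, not the reference implementation from the paper), this construction can be expressed with NumPy/SciPy; the function name `bifidelity_id` and the `hf_runs` callable interface are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import qr

def bifidelity_id(L, hf_runs, r):
    """Rank-r bi-fidelity interpolative decomposition (illustrative sketch).

    L       : (m, n) low-fidelity data matrix, one simulation per column
    hf_runs : callable mapping selected column indices to HF solutions
    r       : target interpolation rank
    """
    # Greedy basis selection: QR with column pivoting on the LF matrix.
    _, _, piv = qr(L, mode='economic', pivoting=True)
    idx = piv[:r]  # skeleton (interpolation) column indices

    # Coefficient matrix C_L: least-squares fit of all LF columns onto
    # the selected skeleton columns, so that L ~ L[:, idx] @ C_L.
    C_L, *_ = np.linalg.lstsq(L[:, idx], L, rcond=None)

    # Lift: only r expensive HF runs, at the selected parameters.
    H_r = hf_runs(idx)            # shape (m_hf, r)
    H_hat = H_r @ C_L             # bi-fidelity approximation of H
    return H_hat, idx, C_L
```

Only the $r$ HF columns are ever computed; the interpolation rule $C_L$, learned entirely from the cheap LF data, is reused unchanged in the HF space.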

2. Low-Rank Structure and Basis Selection

The bi-fidelity framework is predicated on rapid decay in the singular values of the LF data matrix $L$, indicating approximate low-rankness. This property ensures:

  • Compact representation: The majority of the variability in $L$ (and, crucially, in $H$) is captured by the selected basis.
  • Efficiency: The cost of identifying the skeleton columns and building $C_L$ via QR with column pivoting is $O(mnr)$ for $L \in \mathbb{R}^{m \times n}$, with typically $r \ll \min(m, n)$.
  • Transferability: Once the skeleton indices are determined from $L$, only $r$ HF runs (columns of $H$) must be computed, conferring significant computational savings.

The effectiveness of this decomposition hinges on the low-rank similarity between LF and HF data; if the LF basis fails to approximate the HF solution subspace, the bi-fidelity approach is suboptimal.
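In practice, the low-rank hypothesis can be checked by inspecting the singular-value decay of $L$ before committing to any HF runs. The following sketch uses a simple, hypothetical tolerance-based rule (the function name `choose_rank` and the threshold `tol` are assumptions, not from the paper):

```python
import numpy as np

def choose_rank(L, tol=1e-4):
    """Pick the smallest rank r with sigma_{r+1} < tol * sigma_1.

    A simple diagnostic for approximate low-rankness of the LF data
    matrix; tol is an assumed, problem-dependent threshold.
    """
    s = np.linalg.svd(L, compute_uv=False)  # singular values, descending
    small = s < tol * s[0]
    r = int(np.argmax(small)) if small.any() else len(s)
    return max(r, 1), s
```

Slow singular-value decay (no `r` far below `min(m, n)`) is a warning sign that the bi-fidelity approach may yield limited savings for that model pair.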

3. Rigorous Error Analysis

A key result is the derivation of a computable, pragmatic error bound for the bi-fidelity surrogate. The approach models the HF data as $H = TL + E$, with $T$ a lifting operator and $E$ an error term. Introducing the Gramian mismatch

$$\varepsilon(\tau) \equiv \lambda_{\max}\left(H^T H - \tau L^T L\right)$$

for $\tau \geq 0$, the main bound is

$$\| H - \hat{H} \| \leq \min_{k,\, \tau} \rho_k(\tau)$$

where

$$\rho_k(\tau) = \left(1+\|C_L\|\right) \sqrt{\tau\, \sigma_{k+1}^2 + \varepsilon(\tau)} + \|L - \tilde{L}\|\, \sqrt{\tau + \frac{\varepsilon(\tau)}{\sigma_k^2}}$$

and $\sigma_k$ denotes the $k$th singular value of $L$. The first term in the bound accounts for approximation error in the LF low-rank model, while the second depends on the energy mismatch between $H$ and $L$. The computational cost of this error estimation is low, requiring Gramian approximations at only slightly more than $r$ HF points.

The theoretical analysis rigorously quantifies the tradeoff between the quality of the LF basis and the representational error in the HF space; the error decays provided the basis is sufficiently robust and the models are closely coupled. If the LF mesh is refined, or the LF model is otherwise improved to better emulate the HF system, both $\varepsilon(\tau)$ and the bi-fidelity error decrease, with numerical evidence showing that the optimal $\tau$ approaches $1$ as the LF model improves.
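The quantities in the bound are all computable from data: the singular values and residual come from the LF matrix, while the Gramian mismatch is estimated from HF and LF outputs at a small shared sample of parameters. A minimal sketch, assuming such paired samples `H_s`, `L_s` are available (the function name and the candidate-`tau` search are illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def bifi_error_bound(H_s, L_s, L, L_tilde, C_L, k, taus):
    """Evaluate the bi-fidelity bound rho_k(tau) over candidate taus.

    H_s, L_s : HF and LF outputs at the same few sampled parameters,
               used to estimate the Gramian mismatch eps(tau)
    L        : full LF matrix; L_tilde its rank-r ID approximation
    C_L      : LF interpolation coefficients
    k        : 1-based singular-value index in the bound
    """
    s = np.linalg.svd(L, compute_uv=False)
    norm_C = np.linalg.norm(C_L, 2)
    res = np.linalg.norm(L - L_tilde, 2)       # ||L - L_tilde||
    best = np.inf
    for tau in taus:
        # eps(tau) = lambda_max(H_s^T H_s - tau L_s^T L_s), clipped at 0
        eps = max(np.linalg.eigvalsh(H_s.T @ H_s - tau * (L_s.T @ L_s)).max(), 0.0)
        # s[k] is sigma_{k+1}, s[k-1] is sigma_k (0-based indexing)
        rho = (1 + norm_C) * np.sqrt(tau * s[k] ** 2 + eps) \
              + res * np.sqrt(tau + eps / s[k - 1] ** 2)
        best = min(best, rho)
    return best
```

Minimizing over a small grid of `tau` values mirrors the $\min_{k,\tau}$ in the bound while keeping the certification cost negligible next to the HF runs.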

4. Convergence and Robustness Properties

The convergence properties of the bi-fidelity ID are governed by:

  • The decay rate of the singular values $\sigma_k$ of $L$: fast decay yields rapid convergence.
  • The lifting operator $T$: its norm, along with the Gramian mismatch $\varepsilon(\tau)$, controls the tightness of the bound.
  • The number of HF samples: empirically, a number slightly exceeding the interpolation rank $r$ suffices for robust estimation of the error bounds and construction of the HF surrogate.
  • Model hierarchy: as the LF model more accurately resolves the physics, the optimal parameter $\tau$ approaches unity and the Gramian mismatch becomes negligible.

In all practical settings considered, only a small set of representative HF samples is needed for surrogate construction and error certification, indicating scalability as the problem dimension grows or the parameter space expands.

5. Applications in Uncertainty Quantification and Design

Two principal numerical examples demonstrate the efficacy of the method for uncertainty quantification (UQ):

  • Heat-driven cavity flow: The quantity of interest is the steady-state heat flux along the cavity's hot wall, with uncertainties modeled via a Karhunen–Loève expansion and a parameterized viscosity. High-fidelity solutions (e.g., on a $256 \times 256$ mesh) are "lifted" from low-fidelity outputs (on meshes as coarse as $16 \times 16$). The bi-fidelity surrogate with $r = 10$ basis vectors achieves error levels much lower than standalone LF, closely tracking the HF outputs for both mean and variance estimation, as verified via error histograms.
  • Composite beam: The maximal vertical displacement under uncertain load is predicted using a bi-fidelity model that combines classical beam theory (LF) with a finite-element high-fidelity simulation. Even with a single basis vector ($r = 1$), the bi-fidelity surrogate surpasses LF performance by an order of magnitude in RMSE.

In both cases, effectiveness is demonstrated not just for mean predictions but also for higher moments (variance), with the practical error bounds accurately predicting the actual errors and showing empirical robustness to the number and selection of HF samples.

6. Comparison to Classical High- and Low-Fidelity Models

The essence of the bi-fidelity ID is to bridge the accuracy gap between HF and LF models:

  • HF models are computationally infeasible for many-query applications due to high simulation cost.
  • LF models enable large sampling but at the price of lower accuracy and limited quantitative predictive power.
  • Bi-fidelity surrogates blend the advantages: only a small set of HF runs is needed, and LF sampling is used everywhere else, inheriting both speed and a quantifiable degree of HF accuracy. The accompanying error estimate certifies that the surrogate remains reliable, provided the LF representation is sufficiently good and the lifted subspaces are closely aligned.

Quantitatively, the computational savings can be dramatic: error levels an order of magnitude lower than LF-only estimates are attained at costs well below those of HF-only approaches and comparable to the cost of the LF evaluations.

7. Practical Workflow and Implementation Considerations

Implementation proceeds as follows:

  1. Sampling: Run the LF model on a large sample of parameter values; assemble the data matrix $L$.
  2. Decomposition: Apply rank-revealing (pivoted) QR to obtain $L(r)$ and $C_L$.
  3. HF acquisition: Run the HF model at the $r$ selected parameter values (those corresponding to $L(r)$).
  4. Surrogate construction: Compute $\hat{H} = H(r)\, C_L$ to reconstruct the HF solution for any sampled parameter via the LF-derived interpolation rule.
  5. Error estimation: Use a small set of additional HF runs to estimate the Gramian mismatch and certify output accuracy using the derived bounds.

This workflow is strictly non-intrusive: no modifications to existing simulators are required, and the approach generalizes across physics and system types, provided the low-rank hypothesis holds.
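The non-intrusive character of the workflow can be seen in a toy end-to-end sketch, where `run_lf` and `run_hf` are hypothetical stand-ins for external simulator wrappers (a real application would call existing solvers here, unmodified):

```python
import numpy as np
from scipy.linalg import qr

# Hypothetical simulator wrappers: a coarse model and a fine model
# returning a small vector-valued QoI for a scalar parameter mu.
def run_lf(mu):
    return np.array([np.sin(mu), np.cos(mu), mu ** 2])

def run_hf(mu):
    # Toy "high-fidelity" model: same response, rescaled
    return 1.05 * np.array([np.sin(mu), np.cos(mu), mu ** 2])

# 1. Sampling: many cheap LF runs assemble the data matrix L
mus = np.linspace(0.0, 2.0, 50)
L = np.column_stack([run_lf(m) for m in mus])

# 2. Decomposition: pivoted QR selects r skeleton columns of L
r = 3
_, _, piv = qr(L, mode='economic', pivoting=True)
idx = piv[:r]
C_L, *_ = np.linalg.lstsq(L[:, idx], L, rcond=None)

# 3-4. HF acquisition at the r selected parameters, then lift
H_r = np.column_stack([run_hf(mus[i]) for i in idx])
H_hat = H_r @ C_L   # surrogate HF outputs at all 50 parameters
```

The LF and HF codes are called only through their input/output interfaces, which is exactly what "non-intrusive" means here; step 5 (error certification) would reuse a few extra `run_hf` calls on held-out parameters.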


Bi-fidelity interpolative decomposition thus constitutes a rigorous, low-cost, and widely applicable approach to surrogate construction, underpinned by provable error certificates and demonstrated effectiveness in uncertainty quantification tasks involving both linear and nonlinear PDE models (Hampton et al., 2017).

References (1)