
Guided Multi-Fidelity Bayesian Optimization

Updated 29 September 2025
  • Guided multi-fidelity Bayesian optimization is an advanced probabilistic framework that fuses high-cost real experiments with low-cost simulations using a multi-fidelity Gaussian process surrogate.
  • It employs a dynamic correction model and an adaptive cost-aware acquisition function to balance experimental cost against simulation accuracy in closed-loop controller tuning.
  • Empirical results demonstrate significant reductions in real experiments and robust adaptation to evolving simulation biases in robotics and similar applications.

Guided multi-fidelity Bayesian optimization (GMFBO) is a probabilistic optimization framework that strategically integrates multiple, hierarchically related information sources, such as real-world experiments (high fidelity) and simulations or digital twins (low fidelity), into the controller tuning workflow for closed-loop dynamical systems. The methodology aims to maximize data efficiency when real-world evaluations are expensive or limited, adaptively leveraging both corrected and raw simulation outputs under a cost-sensitive acquisition policy and dynamically adjusting to evolving model inaccuracies.

1. Framework Overview

The GMFBO framework is designed for controller tuning in closed-loop systems where two or more sources of information exist: a high-fidelity source (IS1, typically the real system) and lower-fidelity sources (IS2, such as digital twins or reduced-order models), with the possible inclusion of corrected simulation data (IS3). Central to the method is a multi-fidelity Gaussian process (GP) surrogate that fuses all available data via a custom kernel encoding both parameter similarities and fidelity-dependent relationships.

The iterative optimization proceeds by evaluating candidate controller settings using either IS1, IS2, or IS3, then updating the surrogate GP and the correction model at each iteration. Decisions about which information source to query are governed by a composite acquisition function, ensuring a balanced allocation of experimental and simulation effort according to real-time estimates of cross-source correlations and cost.
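As a rough illustration of how such a fused dataset can be organized, the sketch below tags each observation with its fidelity index so that a single surrogate can be fit to IS1, IS2, and IS3 data jointly; the class and function names are illustrative and not taken from the source.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    k: np.ndarray  # controller parameters
    s: int         # information source: 1 = real system (IS1), 2 = digital twin (IS2), 3 = corrected simulation (IS3)
    g: float       # closed-loop objective value

def to_training_arrays(data: list[Observation]):
    """Stack observations into surrogate inputs z = [k, s] and targets g."""
    Z = np.array([np.append(obs.k, obs.s) for obs in data])
    g = np.array([obs.g for obs in data])
    return Z, g
```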

2. Digital Twin Correction Model

To address model mismatch between digital twins and real plant dynamics, GMFBO introduces a dedicated correction model based on a GP (GP_c). For a controller setting $k$, simulation IS2 provides predictions $y(k, t, s')$ over a time horizon $t$, while IS1 delivers true system responses $y(k, t, s=1)$. Paired data $([\![t, y(k,t,s')]\!],\, y(k, t, s=1))$ is used to train GP_c. The correction model outputs an adjusted estimate:

$$\hat{y}(k,t,s=1) = \mu_c\big([t,\, y(k, t, s')]\big)$$

The model further computes a fidelity estimate via the average predictive uncertainty:

$$\bar{\sigma}_c = \sqrt{\frac{1}{T} \sum_{t=0}^{T} \sigma_c^2\big([t,\, y(k,t,s')]\big)}$$

A threshold $\alpha$ (relative to a normalized reference signal) determines automatic acceptance of corrected simulation data (IS3) into the GP surrogate's dataset. When $\bar{\sigma}_c \ll \alpha$, corrected predictions are trusted and added; otherwise, they are discarded to prevent pollution of the optimization trajectory with unreliable surrogate information.
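A minimal sketch of this correction step is shown below, using scikit-learn's Gaussian process regressor as a stand-in for GP_c; the source does not prescribe a particular library, and the kernel choice here is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_correction_gp(t, y_sim, y_real):
    """Train GP_c on pairs ([t, y_sim(t)], y_real(t)) for one controller setting k."""
    X = np.column_stack([t, y_sim])  # inputs [t, y(k, t, s')]
    gp_c = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp_c.fit(X, y_real)
    return gp_c

def corrected_prediction(gp_c, t, y_sim, alpha):
    """Return the corrected trajectory, its fidelity estimate, and the acceptance decision."""
    X = np.column_stack([t, y_sim])
    mu_c, sigma_c = gp_c.predict(X, return_std=True)
    sigma_bar_c = np.sqrt(np.mean(sigma_c ** 2))  # average predictive uncertainty
    accept = sigma_bar_c < alpha                  # trust corrected data only below the threshold
    return mu_c, sigma_bar_c, accept
```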

3. Adaptive Cost-Aware Acquisition Function

Candidate input selection in GMFBO is performed by maximizing a cost-aware Expected Improvement (caEI) acquisition function:

$$\mathrm{caEI}_n(z \mid \hat{e}_{\mathrm{IS2}}) = \frac{a_{\mathrm{EI},n}(z)}{\mathcal{H}(s \mid \hat{e}_{\mathrm{IS2}})}$$

where $z = [k, s]$ combines the controller parameters with a fidelity indicator. The numerator is the standard GP expected improvement:

$$a_{\mathrm{EI},n}(z) = \sigma_n(z) \left( \upsilon_n(z)\, \Phi(\upsilon_n(z)) + \varphi(\upsilon_n(z)) \right), \qquad \upsilon_n(z) = \frac{\mu_n(z) - \mathsf{g}^+}{\sigma_n(z)},$$

with $\mathsf{g}^+$ the best observed objective value.

The denominator $\mathcal{H}(s \mid \hat{e}_{\mathrm{IS2}})$ codifies fidelity-aware cost:

$$\mathcal{H}(s \mid \hat{e}_{\mathrm{IS2}}) = \begin{cases} 1 & \text{if } s = 1, \\ \mathcal{P}_{[\mathcal{H}_{\min}, \mathcal{H}_{\max}]}(\beta \cdot \hat{e}_{\mathrm{IS2}}) & \text{if } s \neq 1, \end{cases}$$

where $\hat{e}_{\mathrm{IS2}}$ measures the real-vs-simulation mismatch, $\beta$ is a scaling constant, and $\mathcal{P}$ denotes clamping to a cost range. This normalization biases sampling towards low-fidelity (simulation-derived) actions when their expected improvement is high and model mismatch is small; as simulation accuracy worsens, more weight is placed on costly real experiments.
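The following sketch implements this acquisition rule under the stated definitions; the clamping bounds, the value of $\beta$, and the function names are illustrative defaults rather than values from the source.

```python
import numpy as np
from scipy.stats import norm

def cost_aware_ei(mu, sigma, g_best, s, e_is2, beta=1.0, h_min=0.1, h_max=1.0):
    """caEI_n(z | e_IS2) = EI(z) / H(s | e_IS2) for a maximization objective."""
    # Standard expected improvement at the candidate z, given posterior mean/std.
    upsilon = (mu - g_best) / np.maximum(sigma, 1e-12)
    ei = sigma * (upsilon * norm.cdf(upsilon) + norm.pdf(upsilon))

    # Fidelity-aware cost: real experiments (s == 1) cost 1; simulation queries get a
    # cheaper, mismatch-dependent cost clamped to [h_min, h_max].
    if s == 1:
        h = 1.0
    else:
        h = np.clip(beta * e_is2, h_min, h_max)
    return ei / h
```

With a small mismatch $\hat{e}_{\mathrm{IS2}}$, the clamped cost stays low and simulation queries are favored; as the mismatch grows, the cost advantage of simulation shrinks and real experiments dominate.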

4. Dynamic Online Adaptation

A defining feature of GMFBO is its closed-loop adaptation to time-varying system behavior and digital twin fidelity. Every new real system observation prompts:

  • Retraining of the correction GP on updated real-vs-digital twin data.
  • Adjustment of the cross-source correlation kernel hyperparameters.

The GP surrogate uses a composite kernel

$$c(z_1, z_2) = \gamma_0(s_1, s_2, \hat{e}_{\mathrm{IS2}}) \, c_0(k_1, k_2) + \gamma_1(s_1, s_2) \, c_1(k_1, k_2)$$

where $\gamma_0$ is a correlation decay factor parameterized by the current digital twin error, and $c_0$, $c_1$ model parameter-space dependence. The key kernel lengthscale $l_{\gamma_0}$ is dynamically set via

$$l_{\gamma_0}(\hat{e}_{\mathrm{IS2}}) = \mathcal{P}_{[l_{\gamma_0,\min},\, l_{\gamma_0,\max}]} \left( \frac{s'}{\hat{e}_{\mathrm{IS2}}} \right),$$

which increases prioritization of the real system as digital twin reliability deteriorates.
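A hedged sketch of such a composite kernel is shown below. The specific functional forms chosen for $\gamma_0$, $\gamma_1$, $c_0$, and $c_1$ (squared-exponential terms plus an indicator for the fidelity-specific part) are assumptions made for illustration; only the error-dependent lengthscale clamping follows the formula above.

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    """Squared-exponential similarity between two points (scalars or vectors)."""
    d = np.linalg.norm(np.atleast_1d(x1) - np.atleast_1d(x2))
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def dynamic_lengthscale(e_is2, s_prime, l_min=0.1, l_max=10.0):
    """l_gamma0 = clamp(s' / e_IS2): a larger digital-twin error yields a shorter
    lengthscale, i.e. weaker cross-fidelity coupling and more reliance on real data."""
    return np.clip(s_prime / max(e_is2, 1e-12), l_min, l_max)

def composite_kernel(z1, z2, e_is2, s_prime=2):
    """c(z1, z2) = gamma0 * c0(k1, k2) + gamma1 * c1(k1, k2) over inputs z = [k, s]."""
    k1, s1 = z1[:-1], z1[-1]
    k2, s2 = z2[:-1], z2[-1]
    l_gamma0 = dynamic_lengthscale(e_is2, s_prime)
    gamma0 = rbf(s1, s2, lengthscale=l_gamma0)   # error-dependent cross-fidelity correlation
    gamma1 = 1.0 if s1 == s2 else 0.0            # fidelity-specific contribution
    return gamma0 * rbf(k1, k2) + gamma1 * rbf(k1, k2, lengthscale=0.5)
```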

Additionally, when a real-system trial is performed, auxiliary corrected simulation data (IS3) are generated in a neighborhood of the tested controller—expanding the evidence available to the GP with low overhead.

5. Empirical Performance and Results

GMFBO was experimentally validated in both simulation studies and hardware implementation on a Maxon HEJ 90 robotic drive:

  • In controlled simulation, ground-truth optimization of controller gains (subject to a cost function) was used to compare standard BO (single-EI) and GMFBO. GMFBO required as few as 6 real experiments to converge, versus 22 for naive BO under substantial digital twin bias.
  • In robotic hardware, GMFBO achieved improved data efficiency, with the acquisition function and correction model dynamically responding to evolving mismatch (e.g., due to frictional changes in the physical system). The method robustly prioritized simulation data when accurate and reverted to real-world evaluation upon digital twin degradation.
  • Competing multi-fidelity BO methods lacking adaptive correction were inferior, especially under significant or time-varying simulation bias.

6. Mathematical Formulation and Algorithmic Summary

Critical formulation details:

  • GP surrogate predictive mean and variance for a candidate input $z^*$ (a minimal numerical sketch follows this list):

$$\mu(z^*) = m(z^*) + c(z^*)^\top (C + \Sigma)^{-1} \big(g(z) - m(z)\big), \qquad \sigma^2(z^*) = c(z^*, z^*) - c(z^*)^\top (C+\Sigma)^{-1} c(z^*)$$

  • Correction GP acceptance rule: accept $\hat{y}(k, t, s=1)$ into the surrogate if $\bar{\sigma}_c \ll \alpha$; otherwise, discard.
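For concreteness, a small numpy sketch of the predictive equations is given below, assuming a generic kernel function and homoscedastic noise; it illustrates the formulas rather than the reported implementation.

```python
import numpy as np

def gp_posterior(z_star, Z, g, kern, mean_fn=lambda z: 0.0, noise_var=1e-6):
    """Predictive mean and variance of the GP surrogate at a candidate input z_star."""
    C = np.array([[kern(zi, zj) for zj in Z] for zi in Z])  # training Gram matrix C
    Sigma = noise_var * np.eye(len(Z))                      # observation-noise matrix
    c_star = np.array([kern(z_star, zi) for zi in Z])       # cross-covariance vector c(z*)
    m_train = np.array([mean_fn(zi) for zi in Z])

    weights = np.linalg.solve(C + Sigma, g - m_train)
    mu = mean_fn(z_star) + c_star @ weights
    var = kern(z_star, z_star) - c_star @ np.linalg.solve(C + Sigma, c_star)
    return mu, var
```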

Overall, the optimization proceeds by the following steps (a compact control-flow sketch appears after this list):

  1. At each round, evaluating the caEI acquisition function and selecting the $z = [k, s]$ that maximizes it.
  2. Querying IS1 (real), IS2 (simulation), or IS3 (corrected simulation) as indicated.
  3. After each IS1 query, retraining the correction GP, updating GP surrogate and cross-source kernel parameters, and augmenting the training set with nearby corrected samples for local refinement.
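The loop below sketches this control flow end to end. Every callable it receives (surrogate fitting, acquisition, queries, correction update) is a placeholder standing in for the components described above, so it illustrates the structure of the algorithm rather than the authors' implementation.

```python
def gmfbo_loop(n_iters, candidates, fit_surrogate, acquisition,
               query_real, query_sim, update_correction, corrected_samples):
    """Outer GMFBO loop: pick an information source via caEI, query it, adapt the models."""
    data = []                                    # fused multi-fidelity dataset of (z, g) pairs
    for _ in range(n_iters):
        surrogate = fit_surrogate(data)          # refit the multi-fidelity GP with the composite kernel
        z = max(candidates, key=lambda cand: acquisition(surrogate, cand))  # step 1: argmax caEI
        k, s = z[:-1], z[-1]
        if s == 1:                               # step 2: real experiment (IS1)
            g = query_real(k)
            update_correction(k, g)              # step 3: retrain GP_c, adapt cross-source kernel
            data.extend(corrected_samples(k))    # step 3: add trusted corrected (IS3) samples near k
        else:                                    # step 2: (corrected) simulation query
            g = query_sim(k, s)
        data.append((z, g))
    return max(data, key=lambda pair: pair[1])   # best observed (z, g)
```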

7. Impact and Significance

GMFBO as articulated provides an advanced, cost-adaptive mechanism for closed-loop optimization of controllers by blending corrected digital twin information with real measurements in heterogeneous, dynamic environments. By online adaptation of model trust and cross-source kernels, the method maintains robustness to simulation drift, achieves strong convergence with limited high-cost experimentation, and substantially improves data efficiency in real-world robotics and related applications (Nobar et al., 22 Sep 2025). This principled approach generalizes to broader engineering domains characterized by the coexistence of digital twins, real systems, and varying simulation fidelity.
