
Surrogate-Informed Adaptive Refinement

Updated 9 February 2026
  • The paper introduces a novel coupling of surrogate models with iterative error-based refinement to reduce computational expense in complex simulations.
  • It details various surrogate types and diagnostic metrics, such as residuals and predictive uncertainties, to drive targeted model corrections.
  • The methodology demonstrates improved efficiency in Bayesian inversion, uncertainty quantification, and optimization through adaptive sample allocation.


Surrogate-informed adaptive refinement is a family of methodologies that couple surrogate modeling with explicit, data-driven refinement strategies to minimize computational cost while achieving high accuracy in complex simulation, inference, or optimization settings. The central premise is to iteratively adapt surrogate models (regression, interpolation, neural, or probabilistic emulators) by concentrating model evaluations, corrective updates, or computational resources in regions where the error is large, where the predicted impact on quantities of interest is high, or where downstream tasks (e.g., Bayesian inference, optimization, experimental design) benefit most from improved surrogate fidelity. The concept is central to large-scale inverse problems, stochastic simulation, design, uncertainty quantification, and adaptive experimentation, as demonstrated in a diverse body of recent literature (Yan et al., 2019, Wang et al., 2024, Takhtaganov et al., 2018, Scarabosio et al., 2018, Meles et al., 6 May 2025, Zeng et al., 2022, Zhang et al., 2018, Cangelosi et al., 4 Sep 2025, Zhu et al., 2024, Shu et al., 2023, Ghassemi et al., 2019, Vohra et al., 2018, Galetzka et al., 2022, Mattis et al., 2018, Fan et al., 7 Dec 2025, Zhang et al., 2024, Chen et al., 26 Dec 2025, Wu et al., 2021).

1. Fundamental Principles and General Frameworks

Surrogate-informed adaptive refinement is grounded in the following key principles:

  • Surrogate construction: An initial (global or local) surrogate model approximates an expensive forward map, solution operator, or response surface based on a set of model outputs from selected input configurations.
  • Error estimation and refinement criterion: The surrogate’s local or global error is explicitly monitored using residuals, uncertainty estimates, adjoint-based indicators, or task-specific impact metrics.
  • Adaptive update mechanism: The surrogate model is iteratively refined by targeting additional, expensive model evaluations (or surrogate corrections) where indicated by the error estimator, with new information incorporated through model retraining, local correction, or hierarchical enrichment.
  • Task-coupling: Refinement is closely tied to the ultimate objective—e.g., accurate posterior characterization in Bayesian inversion, reduction of solver variance in uncertainty quantification, minimal prediction error in dynamical simulation, or welfare improvement in adaptive experimental designs.

This structure enables a closed, implementable workflow that adaptively allocates computational effort where it yields maximal gain for the intended quantity of interest or inference accuracy (Yan et al., 2019, Takhtaganov et al., 2018, Meles et al., 6 May 2025).
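The closed-loop workflow above (surrogate construction, error estimation, adaptive update) can be sketched in a few lines. This is a minimal illustration, not the method of any cited paper: the test function, the polynomial surrogates, and the hierarchical coarse-vs-fine-fit error indicator are all assumed choices.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly forward map (hypothetical test function).
    return np.sin(4.0 * np.pi * x) * np.exp(-x)

def fit_poly(X, y, deg):
    # Cheap global surrogate: least-squares polynomial fit.
    return np.polynomial.Polynomial.fit(X, y, deg)

def adaptive_refine(n_init=6, n_adapt=12, n_cand=400):
    # Step 1: surrogate construction from an initial design.
    X = np.linspace(0.0, 1.0, n_init)
    y = expensive_model(X)
    cand = np.linspace(0.0, 1.0, n_cand)
    for _ in range(n_adapt):
        # Step 2: error indicator via hierarchical surrogate discrepancy --
        # the gap between a coarse and a fine fit flags under-resolved regions.
        lo = fit_poly(X, y, deg=3)
        hi = fit_poly(X, y, deg=min(len(X) - 1, 9))
        indicator = np.abs(hi(cand) - lo(cand))
        # Step 3: adaptive update -- spend the next expensive evaluation
        # where the estimated error is largest.
        x_new = cand[np.argmax(indicator)]
        X = np.append(X, x_new)
        y = np.append(y, expensive_model(x_new))
    return fit_poly(X, y, deg=9), X

surrogate, X = adaptive_refine()  # 6 initial + 12 adaptively placed points
```

The loop stops after a fixed budget here; in practice the refinement criterion itself (e.g., the maximum indicator value falling below a tolerance) supplies the stopping rule.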

2. Surrogate Models and Error Indicators

Surrogate-informed refinement requires expressive surrogate models and robust, diagnostic error measures.
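As one concrete pairing of surrogate and diagnostic, a minimal Gaussian-process regressor exposes a predictive variance that can serve directly as a refinement indicator. The RBF kernel, its hyperparameters, and the test data below are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def rbf_kernel(A, B, ell=0.2, sigma=1.0):
    # Squared-exponential kernel; ell and sigma are assumed hyperparameters.
    d2 = (A[:, None] - B[None, :]) ** 2
    return sigma**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    # Standard GP posterior mean and variance via a Cholesky solve.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = rbf_kernel(X_train, X_test)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf_kernel(X_test, X_test)) - np.sum(v**2, axis=0),
                  0.0, None)
    return mean, var

X = np.array([0.0, 0.3, 0.7, 1.0])
y = np.sin(2.0 * np.pi * X)
grid = np.linspace(0.0, 1.0, 101)
mean, var = gp_predict(X, y, grid)
# Predictive variance vanishes at training points and peaks between them,
# so argmax(var) is a natural target for the next refinement step.
x_next = grid[np.argmax(var)]
```

The same pattern applies to the other indicator families listed above: residuals and adjoint-based estimates replace `var` as the quantity being maximized.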

3. Algorithmic Patterns and Implementation Strategies

Surrogate-informed adaptive refinement is operationalized in several algorithmic forms, often hybridizing techniques to maximize efficiency and scalability:

  • Two-stage surrogate correction: An initial surrogate, trained offline, is incrementally corrected in regions of high surrogate error, often via local models or shallow neural networks that take the prior surrogate as input (Yan et al., 2019).
  • Active learning via acquisition maximization: Iteratively add new sample points by maximizing a task-coupled acquisition function (e.g., EI, variance, impact on QoI), retrain the surrogate, and repeat until convergence (Takhtaganov et al., 2018, Ghassemi et al., 2019, Galetzka et al., 2022, Zhang et al., 2018).
  • Error-driven mesh or domain adaptation: In simulation, build and refine basis/domain discretizations where physically informed or PDE-informed surrogates indicate the highest residual or uncertainty (e.g., PINN-induced AMR, PDE-residual-based mesh refinement) (Zhu et al., 2024, Halder et al., 2019).
  • Local adaptation in optimization: Penalized batch acquisition (e.g., q-EI with repulsion) for surrogate-driven optimization to avoid redundant sampling and improve parallel efficiency (Ghassemi et al., 2019).
  • Sensitivity-driven adaptation: Use sensitivity or DGSM measures to screen and reduce dimensions to those that matter most for the QoI, then focus surrogate refinements there (Vohra et al., 2018, Cangelosi et al., 4 Sep 2025).
  • Multi-fidelity and hierarchical refinement: Sequence of surrogate levels (coarse to fine) with adaptively selected fidelity or complexity, often guided by block-wise or domain-localized indicators (Scarabosio et al., 2018, Galetzka et al., 2022, Mattis et al., 2018).
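The acquisition-maximization and batch-repulsion patterns above can be made concrete with the closed-form expected-improvement (EI) criterion and a simple distance-based penalty. The parameter names and the Gaussian repulsion kernel are illustrative choices, not the exact constructions of the cited works:

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, y_best):
    # Closed-form EI for minimization: the expected amount by which a
    # candidate with surrogate prediction N(mu, sigma^2) improves on the
    # incumbent best value y_best. Serves as the task-coupled acquisition.
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF
    return (y_best - mu) * Phi + sigma * phi

def repulsed(acq, x, pending, scale=0.1):
    # Batch penalty in the spirit of q-EI with repulsion: damp the
    # acquisition near points already chosen for the current batch,
    # discouraging redundant parallel samples.
    for xp in pending:
        acq *= 1.0 - exp(-((x - xp) / scale) ** 2)
    return acq

# Equal predicted mean, different uncertainty: the uncertain point scores
# higher, so the loop explores where the surrogate is least trusted.
ei_certain = expected_improvement(mu=0.5, sigma=0.05, y_best=0.4)
ei_uncertain = expected_improvement(mu=0.5, sigma=0.50, y_best=0.4)
```

Maximizing the (penalized) acquisition over a candidate set, evaluating the true model there, and retraining closes the active-learning loop described above.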

4. Applications in Inference, Simulation, and Optimization

Surrogate-informed adaptive refinement has enabled significant computational savings and accuracy gains across a diverse range of inference, simulation, and optimization tasks.

Numerical evidence includes (a) $2$–$3$ orders of magnitude reduction in required forward-solver calls for Bayesian inversion (Yan et al., 2019, Meles et al., 6 May 2025); (b) exponential convergence of surrogate error and statistical estimators in parametric PDEs and uncertainty quantification (Halder et al., 2019, Galetzka et al., 2022); (c) high-fidelity inference and credible intervals captured even in multimodal, high-dimensional posteriors with adaptive correction (Zhang et al., 2018); and (d) task-based mesh adaptivity yielding up to an $8\times$ cell-count reduction in computational-physics flows (Zhu et al., 2024).

5. Theoretical Guarantees and Performance Bounds

A number of rigorous theoretical results underpin surrogate-informed adaptive refinement workflows:

  • Posterior consistency: Under suitable convergence of the surrogate (e.g., in the $L^2$ norm under the prior), the induced surrogate posterior converges to the true Bayesian posterior in the Hellinger metric (Yan et al., 2019).
  • Error bounds on surrogates and outputs: Adaptive workflows often provide guarantees that surrogate error in the QoI (or expectation with respect to the posterior) can be bounded by the cumulative local error indicators or integrated residual measures (Halder et al., 2019, Scarabosio et al., 2018, Cangelosi et al., 4 Sep 2025).
  • Acquisition gain: Sampling at points where indicator functions (e.g., residual, variance, expected improvement) are maximized provably leads to monotonic reduction of loss terms or uncertainty in next-stage models (Wang et al., 2024, Takhtaganov et al., 2018, Cangelosi et al., 4 Sep 2025).
  • Optimal sample allocation rules: Multi-level, batch, or exploratory sampling rules are often supported by statistical theory relating to optimal sample allocation for variance reduction, e.g., in A-optimal experimental design, MLMC, or theoretical regret bounds in online adaptive experiments (Chen et al., 26 Dec 2025, Galetzka et al., 2022, Fan et al., 7 Dec 2025).
  • Dimensionality-type reduction guarantees: Sensitivity-driven and screening-guided adaptation ensure that low-dimensional surrogates are statistically justified and maintain predictive accuracy for QoIs in high-dimensional input regimes (Vohra et al., 2018, Cangelosi et al., 4 Sep 2025).
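The posterior-consistency statement above has a standard quantitative form. As a hedged sketch with assumed notation (the cited papers may state it with different constants and norms):

```latex
% \mu^{y}: true posterior induced by forward map G; \mu^{y}_{N}: surrogate
% posterior induced by the approximation G_N; \mu_0: prior; d_H: Hellinger distance.
d_{\mathrm{H}}\!\left(\mu^{y},\, \mu^{y}_{N}\right)
  \;\le\; C \,\bigl\lVert G - G_{N} \bigr\rVert_{L^{2}_{\mu_{0}}}
```

That is, convergence of the surrogate forward map in the prior-weighted $L^2$ norm transfers directly to Hellinger convergence of the surrogate posterior, which is what justifies driving refinement with prior- or posterior-weighted error indicators.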

6. Extensions and Recent Directions

Ongoing and recent work broadens the domain and technical approach of surrogate-informed adaptive refinement:

  • Deep surrogate architectures: Offline/online DNN surrogates and PINN-residual error-guided refinement, as well as residual-driven deep adaptive sampling for high-dimensional PDE systems (Yan et al., 2019, Wang et al., 2024, Zhu et al., 2024).
  • Surrogacy in data-scarce settings: SPI and surrogate-powered experimental design leverage limited validated outcomes and abundant proxy/surrogate data, with multiwave adaptive labeling and regularization to prioritize informative samples (Chen et al., 26 Dec 2025, Fan et al., 7 Dec 2025).
  • Causal inference and experimental design: Surrogates for rapid outcome measurement enable online allocation rules (covariate- and response-adaptive, Neyman-optimal) that expedite detection of treatment heterogeneity and optimize participant welfare, with formal regret and confidence-sequence guarantees (Fan et al., 7 Dec 2025, Zhang et al., 2024, Wu et al., 2021).
  • Hierarchical, multi-fidelity, and non-intrusive frameworks: Integration of hp-adaptivity, multi-element surrogates, and residual-based adaptive collocation extends the approach to non-smooth, multi-scale, and black-box scientific models (Galetzka et al., 2022, Halder et al., 2019).

A key trend is the increasing sophistication of multi-level and multi-modal surrogacy, the use of theoretical acquisition and error control functions tied directly to problem objectives, and the explicit incorporation of uncertainty quantification and decision-theoretic criteria within the refinement loop.


References:

(Yan et al., 2019, Wang et al., 2024, Takhtaganov et al., 2018, Scarabosio et al., 2018, Meles et al., 6 May 2025, Zeng et al., 2022, Zhang et al., 2018, Cangelosi et al., 4 Sep 2025, Zhu et al., 2024, Shu et al., 2023, Ghassemi et al., 2019, Vohra et al., 2018, Galetzka et al., 2022, Mattis et al., 2018, Fan et al., 7 Dec 2025, Zhang et al., 2024, Chen et al., 26 Dec 2025, Wu et al., 2021)
