
Survey of multifidelity methods in uncertainty propagation, inference, and optimization (1806.10761v1)

Published 28 Jun 2018 in math.NA, cs.NA, stat.CO, and stat.ME

Abstract: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs and varying fidelities. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the current application at hand, while lower-fidelity models are less accurate but computationally cheaper than the high-fidelity model. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified physics approximation, a reduced model, a data-fit surrogate, etc.) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization.

Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization

This paper, authored by Benjamin Peherstorfer, Karen Willcox, and Max Gunzburger, presents a comprehensive survey of multifidelity methods in the contexts of uncertainty propagation, statistical inference, and optimization. The authors review strategies for combining models of varying fidelity and computational cost to accelerate "outer-loop" applications, in which a model must be evaluated repeatedly at many different inputs. Their contribution categorizes multifidelity methods into adaptation, fusion, and filtering strategies, and explores their application in different computational settings.

Introduction

In computational science and engineering, models often describe systems of interest with varying levels of approximation. For instance, high-fidelity models offer detailed and precise descriptions but at significant computational costs, whereas low-fidelity models provide quicker, albeit less accurate, solutions. These models are pivotal in outer-loop applications like optimization, uncertainty quantification (UQ), and inference, where repeated model evaluations at various inputs can impose prohibitive computational demands if relying solely on high-fidelity models.

The paper is structured into several sections, beginning with an introduction to multifidelity models. It then delineates the three principal strategies underlying multifidelity methods: adaptation, fusion, and filtering. Finally, it discusses multifidelity approaches in detail for each of the outer-loop contexts of uncertainty propagation, inference, and optimization.

Multifidelity Methods for Uncertainty Propagation

For uncertainty propagation, multifidelity methods aim to estimate statistics like the expectation or variance of a model output when the input is described by a random variable. Monte Carlo methods and stochastic collocation are traditional approaches used for this purpose, and multifidelity methods augment these by incorporating low-fidelity models.

Control Variates

A common approach in multifidelity UQ is using control variates. Here, the low-fidelity model acts as an auxiliary variable to reduce the variance of a Monte Carlo estimator. Techniques such as the Multifidelity Monte Carlo method integrate multiple low-fidelity models and adjust their respective contributions to minimize the mean squared error (MSE) of the estimator for a given computational budget. This method capitalizes on the correlation between the outputs of low- and high-fidelity models to enhance estimator efficiency.
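The control-variate idea can be illustrated with a minimal sketch. The models `f_hi` and `f_lo` below are hypothetical stand-ins (not from the paper): a few high-fidelity samples are corrected using many cheap low-fidelity samples, with the coefficient chosen to exploit the correlation between the two outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical models: f_hi is the "expensive" high-fidelity model,
# f_lo a cheap, correlated low-fidelity approximation.
def f_hi(z):
    return np.exp(z) + 0.1 * np.sin(5 * z)

def f_lo(z):
    return 1.0 + z + 0.5 * z**2   # Taylor-like surrogate of exp(z)

n_hi, n_lo = 100, 10_000          # few high-fidelity, many low-fidelity samples
z_hi = rng.standard_normal(n_hi)
z_lo = rng.standard_normal(n_lo)

y_hi = f_hi(z_hi)
y_lo_paired = f_lo(z_hi)          # low-fidelity outputs at the same inputs
y_lo_many = f_lo(z_lo)            # cheap extra samples to estimate E[f_lo]

# Control-variate coefficient: alpha = Cov(f_hi, f_lo) / Var(f_lo),
# which minimizes the variance of the corrected estimator.
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# Multifidelity estimator: plain Monte Carlo on f_hi, shifted by the
# low-fidelity discrepancy between the small and the large sample.
mf_estimate = y_hi.mean() + alpha * (y_lo_many.mean() - y_lo_paired.mean())
```

Because the correction term has zero mean, the estimator remains unbiased for E[f_hi]; the variance reduction grows with the correlation between the two model outputs.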

Importance Sampling

Another significant area covered is importance sampling, where low-fidelity models help construct biasing distributions. The multifidelity method first evaluates a low-fidelity model at many points to build a biasing distribution, then uses that distribution to guide a smaller number of high-fidelity model evaluations. This two-step process reduces the variance of the estimator of the event probability or performance metric of interest.
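A minimal sketch of this two-step construction for a rare-event probability, using hypothetical limit-state functions `g_hi` and `g_lo` (not from the paper): the cheap model locates the failure region, a Gaussian biasing density is fitted there, and the high-fidelity model is then sampled under that density with likelihood-ratio reweighting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical limit-state models; "failure" means output > threshold.
def g_hi(z):
    return z + 0.1 * np.sin(z)

def g_lo(z):
    return z                       # cheap approximation of g_hi

thresh = 3.0                       # rare event under a standard normal input

def npdf(x, m=0.0, s=1.0):
    """Normal probability density function."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Step 1: many cheap low-fidelity evaluations locate the failure region.
z = rng.standard_normal(200_000)
z_fail = z[g_lo(z) > thresh]

# Step 2: fit a Gaussian biasing density to the low-fidelity failure samples.
mu, sigma = z_fail.mean(), z_fail.std(ddof=1)

# Step 3: a small number of high-fidelity evaluations under the biasing
# density, reweighted by the likelihood ratio nominal/biasing.
n_hi = 2_000
zb = rng.normal(mu, sigma, n_hi)
w = npdf(zb) / npdf(zb, mu, sigma)
p_hat = np.mean((g_hi(zb) > thresh) * w)
```

Sampling where failures are likely, and reweighting to keep the estimator unbiased, yields far lower variance than naive Monte Carlo at this sample size.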

Multifidelity Methods for Statistical Inference

In statistical inference, particularly within a Bayesian framework, multifidelity methods accelerate posterior sampling by coupling low- and high-fidelity models.

Two-Stage MCMC

An illustrative method is the two-stage Markov chain Monte Carlo (MCMC) approach, which leverages a low-fidelity model in a preliminary stage to screen candidate samples before resorting to high-fidelity evaluations. Only candidates approved by the low-fidelity model proceed to the expensive high-fidelity evaluation stage, which yields significant computational savings; a corrected acceptance probability in the second stage ensures that the chain still targets the posterior induced by the high-fidelity model.
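A minimal sketch of one two-stage (delayed-acceptance) step with a symmetric random-walk proposal. The unnormalized log-posteriors `logp_hi` and `logp_lo` below are hypothetical; the key point is that a cheap first-stage rejection avoids the high-fidelity call, while the second-stage acceptance ratio divides out the low-fidelity screening so the chain targets the high-fidelity posterior exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical unnormalized log-posteriors: logp_lo cheaply
# approximates the expensive logp_hi.
def logp_hi(x):
    return -0.5 * x**2 + 0.05 * np.cos(3 * x)

def logp_lo(x):
    return -0.5 * x**2

def two_stage_step(x, step=1.0):
    y = x + step * rng.standard_normal()
    # Stage 1: screen with the cheap model (symmetric proposal).
    a1 = min(1.0, np.exp(logp_lo(y) - logp_lo(x)))
    if rng.random() >= a1:
        return x                   # rejected cheaply; no high-fidelity call
    # Stage 2: correct with the expensive model. The low-fidelity ratio
    # is divided out, so the chain's invariant density is the
    # high-fidelity posterior.
    a2 = min(1.0, np.exp((logp_hi(y) - logp_hi(x))
                         - (logp_lo(y) - logp_lo(x))))
    return y if rng.random() < a2 else x

x, chain = 0.0, []
for _ in range(20_000):
    x = two_stage_step(x)
    chain.append(x)
chain = np.asarray(chain)
```

With a good low-fidelity model the stage-2 ratio is close to one, so nearly every surviving candidate is accepted and high-fidelity evaluations are spent only on promising moves.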

Adaptive Approaches

The paper also explores adaptive MCMC strategies where the low-fidelity models are continuously updated using new high-fidelity evaluations during the sampling process. This adaptation improves low-fidelity approximations in real-time, thus enhancing sampling efficiency and MCMC convergence properties.

Multifidelity Methods for Optimization

Optimization tasks, particularly in engineering and scientific computing, benefit significantly from multifidelity strategies. The paper sheds light on both global and local optimization methods that blend high- and low-fidelity models to find optimal solutions effectively.

Efficient Global Optimization (EGO)

Global optimization methods like Efficient Global Optimization (EGO) iteratively refine surrogate models (e.g., kriging models) to balance exploration and exploitation of the solution space. The expected improvement criterion guides the selection of new sample points, allowing informed updates to the surrogate model with strategic high-fidelity evaluations.
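The EGO loop can be sketched compactly. The objective `f`, the fixed squared-exponential kernel, and the grid search over the expected-improvement criterion below are all illustrative simplifications (a real implementation would fit kernel hyperparameters and optimize EI properly).

```python
import numpy as np
from math import erf, sqrt, pi

def f(x):                           # hypothetical expensive objective
    return np.sin(3 * x) + 0.5 * x

# Minimal kriging (GP) surrogate with a fixed squared-exponential kernel.
def kernel(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_predict(X, y, Xs, noise=1e-6):
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)   # posterior variance at Xs
    return mu, np.sqrt(np.maximum(var, 1e-12))

def norm_cdf(z):
    return 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))

def norm_pdf(z):
    return np.exp(-0.5 * z**2) / sqrt(2 * pi)

def expected_improvement(mu, sd, best):
    # Balances exploitation (low mu) and exploration (high sd).
    z = (best - mu) / sd
    return (best - mu) * norm_cdf(z) + sd * norm_pdf(z)

X = np.array([-1.5, -0.5, 0.5, 1.5])   # initial high-fidelity samples
y = f(X)
for _ in range(5):                      # EGO loop: maximize EI, evaluate f
    Xs = np.linspace(-2, 2, 401)
    mu, sd = gp_predict(X, y, Xs)
    ei = expected_improvement(mu, sd, y.min())
    x_new = Xs[np.argmax(ei)]
    X, y = np.append(X, x_new), np.append(y, f(x_new))
```

Each iteration spends exactly one high-fidelity evaluation at the point the criterion deems most informative, which is what makes the approach attractive when evaluations are expensive.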

Trust-Region Methods

In local optimization, multifidelity trust-region methods are prominent. These methods adapt low-fidelity models within a defined trust region, ensuring first-order consistency with the high-fidelity model. This consistency guarantees convergence to a stationary point of the high-fidelity problem, effectively balancing computational cost and solution accuracy.
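The mechanics can be sketched in one dimension with hypothetical models and a simple additive correction. The correction matches the low-fidelity model's value and gradient to the high-fidelity model at the trust-region center (first-order consistency); the standard ratio of actual to predicted decrease then governs acceptance and region resizing.

```python
import numpy as np

# Hypothetical 1-D objective pair; f_lo roughly tracks f_hi.
def f_hi(x):  return (x - 1.0) ** 2 + 0.1 * np.sin(5 * x)
def df_hi(x): return 2 * (x - 1.0) + 0.5 * np.cos(5 * x)
def f_lo(x):  return (x - 0.8) ** 2
def df_lo(x): return 2 * (x - 0.8)

def corrected_model(xc):
    """Additive correction: matches f_hi's value and slope at xc."""
    a = f_hi(xc) - f_lo(xc)
    b = df_hi(xc) - df_lo(xc)
    return lambda x: f_lo(x) + a + b * (x - xc)

xc, radius = 0.0, 0.5
for _ in range(20):
    m = corrected_model(xc)
    # Minimize the corrected surrogate on the trust region [xc-r, xc+r]
    # (grid search for simplicity).
    cand = np.linspace(xc - radius, xc + radius, 201)
    x_new = cand[np.argmin(m(cand))]
    # Accept/reject and resize based on actual vs. predicted decrease.
    actual = f_hi(xc) - f_hi(x_new)
    predicted = m(xc) - m(x_new)
    rho = actual / predicted if predicted > 0 else -1.0
    if rho > 0.1:
        xc = x_new
        if rho > 0.75:
            radius = min(2 * radius, 1.0)
    else:
        radius *= 0.5
```

Because each accepted step must realize a fraction of the decrease the corrected model predicts, the iterate only moves when the high-fidelity objective actually improves, which is the mechanism behind the convergence guarantee.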

Practical Implications and Future Directions

The survey acknowledges the theoretical guarantees and practical implementations of multifidelity methods. It underscores that a blend of strategic model evaluations and adaptively refined approximations can lead to efficient and accurate results in critical applications like UQ, inference, and optimization.

The paper also emphasizes outstanding challenges. One pertinent issue is the assumption of high-fidelity models as ground truth, overlooking model inadequacy inherent to all approximations. Integrating multifidelity approaches with robust model validation, correction techniques, and probabilistic descriptions of model discrepancies represents a crucial area for future research. Additionally, developing multifidelity frameworks that extend beyond computational models to include expert knowledge, experimental data, and other information sources will broaden the utility and applicability of these methods.

Conclusion

This comprehensive survey elucidates the current landscape of multifidelity methods and their applications in computational science and engineering. By integrating high- and low-fidelity models via adaptation, fusion, and filtering strategies, these methods significantly enhance the efficiency of outer-loop applications while retaining high accuracy. The paper provides a foundation for future research directions, highlighting the potential for broader applications and the integration of multifidelity methods with advanced model validation and data fusion techniques.

Authors (3)
  1. Benjamin Peherstorfer (45 papers)
  2. Karen Willcox (24 papers)
  3. Max Gunzburger (51 papers)
Citations (710)