Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization
This paper, by Benjamin Peherstorfer, Karen Willcox, and Max Gunzburger, presents a comprehensive survey of multifidelity methods in the contexts of uncertainty propagation, statistical inference, and optimization. The authors review strategies that combine models of varying fidelity and computational cost to accelerate "outer-loop" applications, in which a model must be evaluated repeatedly. Their central contribution is a categorization of multifidelity methods into adaptation, fusion, and filtering strategies, together with a discussion of how each applies in different computational settings.
Introduction
In computational science and engineering, models often describe systems of interest with varying levels of approximation. High-fidelity models offer detailed and precise descriptions but at significant computational cost, whereas low-fidelity models provide quicker, albeit less accurate, solutions. These models are pivotal in outer-loop applications like optimization, uncertainty quantification (UQ), and inference, where repeated model evaluations at various inputs can impose prohibitive computational demands if one relies solely on high-fidelity models.
The paper begins with an introduction to multifidelity models, then delineates the three principal strategies for multifidelity methods: adaptation, fusion, and filtering. It then discusses multifidelity approaches in detail for uncertainty propagation, inference, and optimization.
Multifidelity Methods for Uncertainty Propagation
For uncertainty propagation, multifidelity methods aim to estimate statistics like the expectation or variance of a model output when the input is described by a random variable. Monte Carlo methods and stochastic collocation are traditional approaches used for this purpose, and multifidelity methods augment these by incorporating low-fidelity models.
Control Variates
A common approach in multifidelity UQ is using control variates. Here, the low-fidelity model acts as an auxiliary variable to reduce the variance of a Monte Carlo estimator. Techniques such as the Multifidelity Monte Carlo method integrate multiple low-fidelity models and adjust their respective contributions to minimize the mean squared error (MSE) of the estimator for a given computational budget. This method capitalizes on the correlation between the outputs of low- and high-fidelity models to enhance estimator efficiency.
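The variance-reduction idea can be sketched in a few lines. The sketch below uses the classical two-model control-variate form (rather than the paper's full multi-model allocation), and the model pair `f_hi`/`f_lo`, the sample sizes, and the input distribution are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model pair: f_hi stands in for the expensive high-fidelity
# model, f_lo for a cheap, correlated low-fidelity approximation.
def f_hi(z):
    return np.exp(np.sin(z)) + 0.1 * z**2

def f_lo(z):
    return 1.0 + np.sin(z) + 0.1 * z**2  # crude surrogate of f_hi

n_hi, n_lo = 100, 10_000             # budget: few expensive, many cheap samples
z_hi = rng.standard_normal(n_hi)     # shared inputs for the coupled term
z_lo = rng.standard_normal(n_lo)     # extra inputs used only by the cheap model

y_hi, y_lo = f_hi(z_hi), f_lo(z_hi)

# Control-variate coefficient alpha = Cov(hi, lo) / Var(lo); estimating it
# from the same samples introduces a small bias, which is common in practice.
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

# Multifidelity estimator: the high-fidelity sample mean, corrected by the
# difference between a well-resolved and a coarse low-fidelity mean.
est = y_hi.mean() + alpha * (f_lo(z_lo).mean() - y_lo.mean())
```

The correction term has zero expectation, so the estimator remains unbiased in the high-fidelity model while its variance shrinks with the correlation between the two outputs.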
Importance Sampling
Another significant area covered is importance sampling, where low-fidelity models help construct biasing distributions. The multifidelity method evaluates a low-fidelity model at numerous points, builds a biasing distribution, and subsequently uses it to guide high-fidelity model evaluations. This two-step process effectively reduces the variance of the estimator for the event probability or performance metric of interest.
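The two steps can be sketched for a rare-event probability. The limit-state pair `f_hi`/`f_lo`, the Gaussian form of the biasing density, and all thresholds below are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2), used for the likelihood-ratio weights."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical limit-state pair (stand-ins for illustration)
f_hi = lambda z: z**3 - 2.0 * z      # expensive performance function
f_lo = lambda z: z**3 - 2.2 * z      # cheap approximation
t = 5.0                              # failure event: {f(z) > t}

# Step 1: screen many cheap low-fidelity evaluations to locate the failure region
z = rng.standard_normal(100_000)
fail_lo = z[f_lo(z) > t]

# Fit a simple Gaussian biasing density to the low-fidelity failure samples
mu, sigma = fail_lo.mean(), fail_lo.std(ddof=1)

# Step 2: sample the biasing density, evaluate the high-fidelity model there,
# and reweight by the likelihood ratio p(z)/q(z) so the estimator stays unbiased
n = 500
zb = rng.normal(mu, sigma, n)
w = normal_pdf(zb) / normal_pdf(zb, mu, sigma)
p_hat = np.mean((f_hi(zb) > t) * w)
```

Because the weights correct for the change of distribution, the low-fidelity model only influences where the few high-fidelity evaluations are spent, not the estimator's expectation.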
Multifidelity Methods for Statistical Inference
In statistical inference, particularly within a Bayesian framework, multifidelity methods accelerate posterior sampling by coupling low- and high-fidelity models.
Two-Stage MCMC
An illustrative method is the two-stage Markov chain Monte Carlo (MCMC) approach, which leverages a low-fidelity model in a preliminary stage to screen candidate samples before resorting to high-fidelity evaluations. This leads to significant computational savings while preserving the accuracy of posterior estimates, since only candidates approved by the low-fidelity model proceed to the expensive high-fidelity evaluation stage.
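A minimal sketch of this screening, in the delayed-acceptance form with a symmetric random-walk proposal, looks as follows. The two unnormalized log-posteriors are hypothetical stand-ins (Gaussians with slightly different means), chosen only to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalized log-posteriors: the low-fidelity one is a cheap,
# slightly shifted approximation of the high-fidelity one.
log_post_hi = lambda x: -0.5 * (x - 1.0) ** 2
log_post_lo = lambda x: -0.5 * (x - 0.9) ** 2

x, lp_hi, lp_lo = 0.0, log_post_hi(0.0), log_post_lo(0.0)
chain, n_hi_evals = [], 0

for _ in range(5000):
    cand = x + 0.5 * rng.standard_normal()
    cand_lo = log_post_lo(cand)
    # Stage 1: cheap screening with the low-fidelity posterior
    if np.log(rng.uniform()) >= cand_lo - lp_lo:
        chain.append(x)            # rejected early: no high-fidelity evaluation
        continue
    # Stage 2: correct with the high-fidelity posterior, so the chain still
    # targets the high-fidelity posterior exactly
    cand_hi = log_post_hi(cand)
    n_hi_evals += 1
    if np.log(rng.uniform()) < (cand_hi - lp_hi) - (cand_lo - lp_lo):
        x, lp_hi, lp_lo = cand, cand_hi, cand_lo
    chain.append(x)

chain = np.array(chain)
```

The second-stage acceptance ratio divides out the low-fidelity ratio used in the first stage, which is what keeps the high-fidelity posterior as the chain's stationary distribution.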
Adaptive Approaches
The paper also explores adaptive MCMC strategies where the low-fidelity models are continuously updated using new high-fidelity evaluations during the sampling process. This adaptation improves the low-fidelity approximation in real time, thus enhancing sampling efficiency and MCMC convergence properties.
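One simple way to realize such an adaptation is to keep every high-fidelity evaluation made so far and refit a cheap surrogate to the growing data set. The quadratic least-squares surrogate and the target density below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expensive log-posterior (stand-in for illustration)
log_post_hi = lambda x: -0.5 * (x - 1.0) ** 2 + 0.1 * np.sin(3 * x)

# Start from a few high-fidelity evaluations on a coarse grid
xs = list(np.linspace(-2.0, 4.0, 5))
ys = [log_post_hi(x) for x in xs]

def refit(xs, ys):
    # A least-squares quadratic acts as the adaptive low-fidelity model
    return np.poly1d(np.polyfit(xs, ys, 2))

surrogate = refit(xs, ys)

# During sampling, each new high-fidelity evaluation enriches the data set
# and triggers a refit, improving the surrogate where the chain visits.
for x_new in rng.normal(1.0, 1.0, 20):
    xs.append(x_new)
    ys.append(log_post_hi(x_new))   # new high-fidelity evaluation
    surrogate = refit(xs, ys)
```

In a full sampler, `surrogate` would replace `log_post_lo` in a screening stage like the one above, with the refits concentrated in regions of high posterior probability.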
Multifidelity Methods for Optimization
Optimization tasks, particularly in engineering and scientific computing, benefit significantly from multifidelity strategies. The paper sheds light on both global and local optimization methods that blend high- and low-fidelity models to find optimal solutions effectively.
Efficient Global Optimization (EGO)
Global optimization methods like Efficient Global Optimization (EGO) iteratively refine surrogate models (e.g., kriging models) to balance exploration and exploitation of the solution space. The expected improvement criterion guides the selection of new sample points, allowing informed updates to the surrogate model with strategic high-fidelity evaluations.
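For a Gaussian surrogate prediction with mean `mu` and standard deviation `sigma` at a candidate point, the expected improvement over the best observed value `y_best` (for minimization) has the well-known closed form sketched below; the function name and scalar interface are choices of this sketch:

```python
import math

def expected_improvement(mu, sigma, y_best):
    """Expected improvement of a candidate whose surrogate prediction is
    N(mu, sigma^2), relative to the best observed value y_best (minimization)."""
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    return (y_best - mu) * Phi + sigma * phi
```

The first term rewards candidates predicted to beat `y_best` (exploitation), the second rewards predictive uncertainty (exploration); the next high-fidelity evaluation is placed where this criterion is maximal.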
Trust-Region Methods
In local optimization, multifidelity trust-region methods are prominent. These methods adapt low-fidelity models within a defined trust region, ensuring first-order consistency with the high-fidelity model at the trust-region center. This consistency guarantees convergence to a stationary point of the high-fidelity problem, effectively balancing computational cost and solution accuracy.
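The standard way to enforce first-order consistency is an additive correction that matches the high-fidelity value and gradient at the trust-region center. The one-dimensional model pair below is a hypothetical stand-in for illustration:

```python
import numpy as np

# Hypothetical model pair with analytic gradients (illustrative stand-ins)
f_hi = lambda x: (x - 1.0) ** 2 + 0.1 * np.sin(5 * x)
g_hi = lambda x: 2.0 * (x - 1.0) + 0.5 * np.cos(5 * x)   # f_hi'
f_lo = lambda x: (x - 1.2) ** 2
g_lo = lambda x: 2.0 * (x - 1.2)                          # f_lo'

def corrected_model(xk):
    """Additively corrected low-fidelity model that matches the
    high-fidelity value and gradient at the trust-region center xk."""
    dv = f_hi(xk) - f_lo(xk)          # value mismatch at xk
    dg = g_hi(xk) - g_lo(xk)          # gradient mismatch at xk
    return lambda x: f_lo(x) + dv + dg * (x - xk)

xk = 0.5
m = corrected_model(xk)   # minimized within the trust region around xk
```

By construction m(xk) = f_hi(xk) and m'(xk) = f_hi'(xk), which is exactly the consistency condition the convergence theory requires; the trust-region radius then controls how far the corrected model is trusted.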
Practical Implications and Future Directions
The survey acknowledges the theoretical guarantees and practical implementations of multifidelity methods. It underscores that a blend of strategic model evaluations and adaptively refined approximations can lead to efficient and accurate results in critical applications like UQ, inference, and optimization.
The paper also emphasizes outstanding challenges. One pertinent issue is the assumption of high-fidelity models as ground truth, overlooking model inadequacy inherent to all approximations. Integrating multifidelity approaches with robust model validation, correction techniques, and probabilistic descriptions of model discrepancies represents a crucial area for future research. Additionally, developing multifidelity frameworks that extend beyond computational models to include expert knowledge, experimental data, and other information sources will broaden the utility and applicability of these methods.
Conclusion
This comprehensive survey elucidates the current landscape of multifidelity methods and their applications in computational science and engineering. By integrating high- and low-fidelity models via adaptation, fusion, and filtering strategies, these methods significantly enhance the efficiency of outer-loop applications while retaining high accuracy. The paper provides a foundation for future research directions, highlighting the potential for broader applications and the integration of multifidelity methods with advanced model validation and data fusion techniques.