Deep Gaussian Processes for Multi-fidelity Modeling (1903.07320v1)

Published 18 Mar 2019 in stat.ML and cs.LG

Abstract: Multi-fidelity methods are prominently used when cheaply obtained, but possibly biased and noisy, observations must be effectively combined with limited or expensive true data in order to construct reliable models. This arises both in fundamental machine learning procedures, such as Bayesian optimization, and in practical science and engineering applications. In this paper we develop a novel multi-fidelity model which treats layers of a deep Gaussian process as fidelity levels, and uses a variational inference scheme to propagate uncertainty across them. This allows for capturing nonlinear correlations between fidelities with a lower risk of overfitting than existing methods exploiting compositional structure, which are conversely burdened by structural assumptions and constraints. We show that the proposed approach makes substantial improvements in quantifying and propagating uncertainty in multi-fidelity set-ups, which in turn improves their effectiveness in decision-making pipelines.
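
For readers unfamiliar with the compositional multi-fidelity setup the paper builds on, the sketch below illustrates the basic idea with two fidelity levels: a GP is fit to many cheap low-fidelity observations, and its prediction is appended as an extra input feature for a GP fit to a handful of expensive high-fidelity observations. This is only a simplified NARGP-style illustration using scikit-learn exact GPs with toy functions and data; the paper's model instead treats the fidelities as layers of a single deep Gaussian process and propagates the full predictive distribution between them with variational inference, rather than plugging in a point prediction.

```python
# Minimal two-fidelity sketch (NARGP-style composition, NOT the paper's MF-DGP).
# Toy functions, data sizes, and variable names are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def f_low(x):   # cheap, biased approximation
    return np.sin(8 * np.pi * x)

def f_high(x):  # expensive "true" function, nonlinearly related to f_low
    return (x - np.sqrt(2)) * f_low(x) ** 2

# Many cheap observations, few expensive ones.
x_lo = rng.uniform(0, 1, size=(50, 1))
y_lo = f_low(x_lo).ravel() + 0.05 * rng.standard_normal(50)
x_hi = rng.uniform(0, 1, size=(8, 1))
y_hi = f_high(x_hi).ravel()

# Level 1: GP on the low-fidelity data.
gp_lo = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(x_lo, y_lo)

# Level 2: GP on [x, level-1 prediction], capturing a nonlinear mapping
# from low to high fidelity.
z_hi = np.hstack([x_hi, gp_lo.predict(x_hi).reshape(-1, 1)])
gp_hi = GaussianProcessRegressor(RBF()).fit(z_hi, y_hi)

# Plug-in prediction: push test points through both levels using the
# low-fidelity posterior mean only.
x_test = np.linspace(0, 1, 5).reshape(-1, 1)
z_test = np.hstack([x_test, gp_lo.predict(x_test).reshape(-1, 1)])
mean, std = gp_hi.predict(z_test, return_std=True)
print("plug-in mean:", np.round(mean, 3), "std:", np.round(std, 3))

# Crude uncertainty propagation: sample from the low-fidelity posterior and
# average the resulting high-fidelity predictions.  The paper replaces this
# two-stage, sample-based scheme with variational inference in one deep GP.
samples = gp_lo.sample_y(x_test, n_samples=20, random_state=0)  # (n_test, 20)
preds = np.stack([
    gp_hi.predict(np.hstack([x_test, s.reshape(-1, 1)]))
    for s in samples.T
])
print("MC mean:", np.round(preds.mean(axis=0), 3))
print("MC std :", np.round(preds.std(axis=0), 3))
```

The Monte Carlo step at the end is the part the paper improves on: propagating low-fidelity uncertainty by sampling (or ignoring it entirely via the plug-in mean) is exactly the kind of two-stage approximation that a single deep GP with layer-wise variational inference avoids.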

Authors (5)
  1. Kurt Cutajar (8 papers)
  2. Mark Pullin (2 papers)
  3. Andreas Damianou (28 papers)
  4. Neil Lawrence (17 papers)
  5. Javier González (44 papers)
Citations (105)
