Multi-fidelity Gaussian Process Bandit Optimisation (1603.06288v4)

Published 20 Mar 2016 in stat.ML, cs.AI, and cs.LG

Abstract: In many scientific and engineering applications, we are tasked with the maximisation of an expensive-to-evaluate black-box function $f$. Traditional settings for this problem assume just the availability of this single function. However, in many cases, cheap approximations to $f$ may be obtainable. For example, the expensive real-world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to cheaply eliminate regions of low function value and reserve the expensive evaluations of $f$ for a small but promising region, speedily identifying the optimum. We formalise this task as a \emph{multi-fidelity} bandit problem where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. In our theoretical analysis we demonstrate that it exhibits precisely this behaviour and achieves better regret than strategies which ignore multi-fidelity information. Empirically, MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.

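Below is a minimal, illustrative sketch of the multi-fidelity UCB idea described in the abstract: a cheap approximation is queried while its posterior is still wide, and the expensive function is reserved for the promising region the cheap one identifies. The toy objectives `f_high`/`f_low`, the bias bound `zeta`, the switch threshold `gamma`, and the use of scikit-learn GPs are assumptions made for illustration; this is not the paper's exact MF-GP-UCB algorithm.

```python
# Minimal two-fidelity GP-UCB sketch (illustrative only; not the paper's exact MF-GP-UCB).
# Assumed pieces: toy objectives f_high/f_low, bias bound zeta, switch threshold gamma.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_high(x):                     # expensive black-box target (toy example)
    return -(x - 0.6) ** 2

def f_low(x):                      # cheap, slightly biased approximation of f_high
    return f_high(x) + 0.05 * np.sin(8.0 * x)

X_grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # candidate points
zeta, gamma, beta = 0.1, 0.05, 2.0                    # bias bound, switch threshold, UCB weight
data = {"low": ([], []), "high": ([], [])}            # observations per fidelity
gps = {m: GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6) for m in data}

# Seed each fidelity with one random observation.
rng = np.random.default_rng(0)
for m, f in (("low", f_low), ("high", f_high)):
    x0 = rng.uniform(0.0, 1.0)
    data[m][0].append(x0)
    data[m][1].append(f(x0))

def posterior(m):
    """Fit the GP for fidelity m and return its posterior mean and std on the grid."""
    X, y = data[m]
    gps[m].fit(np.array(X).reshape(-1, 1), np.array(y))
    return gps[m].predict(X_grid, return_std=True)

for t in range(30):
    mu_low, sd_low = posterior("low")
    mu_high, sd_high = posterior("high")
    # Each fidelity yields an upper bound on f_high: its own UCB plus its bias bound.
    upper = np.minimum(mu_low + beta * sd_low + zeta, mu_high + beta * sd_high)
    i = int(np.argmax(upper))
    x = float(X_grid[i, 0])
    # Query the cheap fidelity while it is still uncertain at x; otherwise pay for f_high.
    m = "low" if sd_low[i] > gamma else "high"
    data[m][0].append(x)
    data[m][1].append((f_low if m == "low" else f_high)(x))

best = max(zip(data["high"][1], data["high"][0]))     # best expensive evaluation so far
print(f"best f_high value {best[0]:.4f} at x = {best[1]:.3f}")
```

The multi-fidelity step here is taking the minimum of the per-fidelity upper bounds (each shifted by its bias bound), so a tight bound from the cheap approximation can rule out a region before any expensive evaluation is spent there.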
Authors (5)
  1. Kirthevasan Kandasamy (36 papers)
  2. Gautam Dasarathy (38 papers)
  3. Junier B. Oliva (27 papers)
  4. Jeff Schneider (99 papers)
  5. Barnabas Poczos (173 papers)
Citations (70)
