
$\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization (2204.11051v1)

Published 23 Apr 2022 in cs.LG and stat.ML

Abstract: Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of ML algorithms. While known for its sample efficiency, vanilla BO cannot utilize readily available prior beliefs the practitioner has about the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, $\pi$BO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when $\pi$BO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that $\pi$BO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that $\pi$BO improves on the state-of-the-art performance for a popular deep learning task, with a 12.5$\times$ time-to-accuracy speedup over prominent BO approaches.
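The core idea described in the abstract — multiplying an existing acquisition function by the user's prior over the optimum's location, with the prior's influence decaying as observations accumulate — can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: the Gaussian-process posterior values (`mu`, `sigma`), the decay hyperparameter `beta`, and the helper names are assumptions for the sake of the example.

```python
import numpy as np
from math import erf

def _norm_pdf(z):
    # Standard normal density
    return np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def _norm_cdf(z):
    # Standard normal CDF via math.erf (avoids a SciPy dependency)
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

def expected_improvement(mu, sigma, best_f):
    """Standard EI for minimization, given GP posterior mean/std arrays."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best_f - mu) / sigma
    return sigma * (z * _norm_cdf(z) + _norm_pdf(z))

def pi_bo_acquisition(mu, sigma, best_f, prior_density, n, beta=10.0):
    """pi-BO-style acquisition: weight EI by the user prior raised to
    beta/n, so the prior dominates early and its influence decays as the
    iteration count n grows (recovering plain EI in the limit).
    `beta` is an illustrative decay hyperparameter, not a value from the paper."""
    return expected_improvement(mu, sigma, best_f) * prior_density ** (beta / n)
```

In use, `prior_density` would be the user's belief $\pi(x)$ evaluated at the candidate points; early on, candidates in high-prior regions are strongly favored, while for large `n` the exponent `beta/n` shrinks toward zero and the method converges to the underlying acquisition function, consistent with the prior-independent convergence rates claimed in the abstract.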

Authors (6)
  1. Carl Hvarfner (10 papers)
  2. Danny Stoll (9 papers)
  3. Artur Souza (5 papers)
  4. Marius Lindauer (71 papers)
  5. Frank Hutter (177 papers)
  6. Luigi Nardi (36 papers)
Citations (63)