Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start (1712.02902v1)

Published 8 Dec 2017 in stat.ML

Abstract: Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization. Typically, BO is powered by a Gaussian process (GP), whose algorithmic complexity is cubic in the number of evaluations. Hence, GP-based BO cannot leverage large amounts of past or related function evaluations, for example, to warm start the BO procedure. We develop a multiple adaptive Bayesian linear regression model as a scalable alternative whose complexity is linear in the number of observations. The multiple Bayesian linear regression models are coupled through a shared feedforward neural network, which learns a joint representation and transfers knowledge across machine learning problems.
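The abstract's core idea — replacing a Gaussian process with Bayesian linear regression on features produced by a shared feedforward network, giving cost linear in the number of observations — can be illustrated with a minimal sketch. Assumptions not in the abstract: the feature network here is a fixed random `tanh` layer (the paper learns it jointly across tasks), and the hyperparameters `alpha` (weight-prior precision) and `sigma2` (noise variance) are set by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy evaluations of a 1-D black-box function.
n, d_in, d_feat = 50, 1, 32
X = rng.uniform(-3, 3, size=(n, d_in))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Hypothetical fixed feedforward feature map; the paper instead learns
# this network jointly across related problems to transfer knowledge.
W1 = rng.standard_normal((d_in, d_feat))
b1 = rng.standard_normal(d_feat)

def features(X):
    return np.tanh(X @ W1 + b1)  # shape (n, d_feat)

# Bayesian linear regression on the features: posterior over weights
# w ~ N(mu, Sigma) with prior N(0, alpha^{-1} I) and noise variance
# sigma2. Forming Phi^T Phi costs O(n * d_feat^2), i.e. linear in the
# number of observations n, versus O(n^3) for a Gaussian process.
alpha, sigma2 = 1.0, 0.01
Phi = features(X)
A = Phi.T @ Phi / sigma2 + alpha * np.eye(d_feat)
Sigma = np.linalg.inv(A)
mu = Sigma @ Phi.T @ y / sigma2

def predict(X_new):
    """Predictive mean and variance at new inputs (for the BO acquisition)."""
    Phi_new = features(X_new)
    mean = Phi_new @ mu
    var = sigma2 + np.einsum("ij,jk,ik->i", Phi_new, Sigma, Phi_new)
    return mean, var

mean, var = predict(np.array([[0.0], [2.0]]))
```

The predictive mean and variance returned by `predict` are what a BO acquisition function (e.g. expected improvement) would consume; warm starting amounts to fitting the shared network on evaluations from related problems before optimizing a new one.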

Authors (4)
  1. Valerio Perrone (20 papers)
  2. Rodolphe Jenatton (41 papers)
  3. Matthias Seeger (22 papers)
  4. Cedric Archambeau (44 papers)
Citations (24)