
Time-Varying Gaussian Process Bandits with Unknown Prior (2402.01632v3)

Published 2 Feb 2024 in cs.LG and stat.ML

Abstract: Bayesian optimisation requires fitting a Gaussian process model, which in turn requires specifying a prior on the unknown black-box function -- most of the theoretical literature assumes this prior is known. However, it is common to have more than one plausible prior for a given black-box function, for example suggested by domain experts with differing opinions. In some cases, the type-II maximum likelihood estimator for selecting the prior enjoys a consistency guarantee, but this guarantee does not apply universally to all types of priors. If the problem is stationary, one could rely on the Regret Balancing scheme to conduct the optimisation, but in the case of time-varying problems, such a scheme cannot be used. To address this gap in existing research, we propose a novel algorithm, PE-GP-UCB, which is capable of solving time-varying Bayesian optimisation problems even without exact knowledge of the function's prior. The algorithm relies on the fact that either the observed function values are consistent with only some of the priors, in which case it is easy to reject the wrong priors, or the observations are consistent with all candidate priors, in which case it does not matter which prior our model relies on. We provide a regret bound for the proposed algorithm. Finally, we empirically evaluate our algorithm on toy and real-world time-varying problems and show that it outperforms the maximum likelihood estimator, a fully Bayesian treatment of the unknown prior and Regret Balancing.
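The elimination idea in the abstract -- keep a set of candidate priors, act optimistically under the most favourable one, and discard priors whose confidence bands the observations violate -- can be sketched in a few lines. The following is a minimal illustrative NumPy sketch of a generic prior-elimination-with-UCB loop, not the paper's actual PE-GP-UCB algorithm: the RBF-lengthscale candidates, the hold-one-out elimination test, and all function names are assumptions made for this example.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale, variance=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(x_train, y_train, x_query, lengthscale, noise=1e-3):
    """Posterior mean and standard deviation of a zero-mean GP."""
    K = rbf_kernel(x_train, x_train, lengthscale) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train, lengthscale)
    Kss = rbf_kernel(x_query, x_query, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(np.diag(Kss) - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def prior_elimination_ucb(f, x_grid, lengthscales, n_rounds=20, beta=2.0):
    """Optimise f over x_grid while eliminating implausible GP priors.

    Each candidate prior is identified with an RBF lengthscale. A prior
    is rejected when its confidence band, fitted on all but the latest
    observation, fails to contain that observation.
    """
    candidates = set(lengthscales)
    x_obs, y_obs = [], []
    for _ in range(n_rounds):
        # Elimination step: test each surviving prior on the newest point.
        if len(x_obs) >= 2:
            x_prev, y_prev = np.array(x_obs[:-1]), np.array(y_obs[:-1])
            survivors = set()
            for ls in candidates:
                mu, sd = gp_posterior(x_prev, y_prev,
                                      np.array([x_obs[-1]]), ls)
                if abs(y_obs[-1] - mu[0]) <= beta * sd[0]:
                    survivors.add(ls)
            if survivors:  # never empty the candidate set entirely
                candidates = survivors
        # Optimistic step: highest UCB over all surviving priors.
        best_ucb, x_next = -np.inf, x_grid[0]
        for ls in candidates:
            if x_obs:
                mu, sd = gp_posterior(np.array(x_obs), np.array(y_obs),
                                      x_grid, ls)
            else:
                mu, sd = np.zeros(len(x_grid)), np.ones(len(x_grid))
            ucb = mu + beta * sd
            i = int(np.argmax(ucb))
            if ucb[i] > best_ucb:
                best_ucb, x_next = ucb[i], x_grid[i]
        x_obs.append(x_next)
        y_obs.append(f(x_next))
    return x_obs, y_obs, candidates
```

In this sketch a wrong prior (e.g. a far-too-long lengthscale on a sharply peaked function) is either rejected once its predictions contradict the data, or it survives because its predictions agree with the true function near the queried points, in which case acting on it is harmless -- mirroring the dichotomy the abstract describes. The paper additionally handles the time-varying setting and supplies the regret analysis, neither of which appears here.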

Authors (3)
  1. Juliusz Ziomek (11 papers)
  2. Masaki Adachi (15 papers)
  3. Michael A. Osborne (73 papers)
Citations (2)
