TS-RSR: A provably efficient approach for batch Bayesian Optimization (2403.04764v4)
Abstract: This paper presents a new approach for batch Bayesian Optimization (BO), called Thompson Sampling-Regret to Sigma Ratio directed sampling (TS-RSR), in which we select each new batch of actions by minimizing a Thompson Sampling approximation of a regret-to-uncertainty ratio. This sampling objective coordinates the actions chosen in each batch so as to minimize redundancy between points while focusing on points with high predictive mean or high uncertainty. Theoretically, we provide rigorous convergence guarantees on our algorithm's regret, and numerically, we demonstrate that our method attains state-of-the-art performance on a range of challenging synthetic and realistic test functions, outperforming several competitive benchmark batch BO algorithms.
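The selection rule described in the abstract can be illustrated with a minimal sketch. Everything below is inferred from the abstract rather than taken from the paper: the RBF kernel, the greedy per-point batch construction, the specific ratio (Thompson-sampled optimum minus posterior mean, divided by posterior standard deviation), and the variance-conditioning step that discourages redundant picks within a batch are all illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch of a TS-RSR-style batch selection rule (assumptions noted above).
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between row-vector inputs A (n,d) and B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_cand, noise=1e-4):
    """Posterior mean and covariance of a zero-mean GP over candidate points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand)
    Kss = rbf_kernel(X_cand, X_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    V = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, Kss - V.T @ V

def ts_rsr_batch(X_train, y_train, X_cand, batch_size, rng):
    """Greedily pick a batch by minimizing a Thompson-sampled regret/sigma ratio."""
    mu, cov = gp_posterior(X_train, y_train, X_cand)
    batch = []
    for _ in range(batch_size):
        sigma = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
        # Thompson sample a function from the current (hallucinated) posterior.
        f_sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(mu)))
        # Approximate per-point regret against the sampled optimum: small for
        # candidates with high predictive mean; dividing by sigma favors
        # candidates with high uncertainty.
        regret = f_sample.max() - mu
        idx = int(np.argmin(regret / sigma))
        batch.append(idx)
        # Condition the covariance on the chosen point (Schur complement), so its
        # variance shrinks toward zero and nearby redundant candidates are penalized.
        k = cov[:, idx].copy()
        cov = cov - np.outer(k, k) / (cov[idx, idx] + 1e-12)
        cov = 0.5 * (cov + cov.T)  # re-symmetrize against numerical drift
    return batch

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (5, 1))
y_train = np.sin(6 * X_train[:, 0])
X_cand = np.linspace(0, 1, 200)[:, None]
print(ts_rsr_batch(X_train, y_train, X_cand, batch_size=4, rng=rng))
```

Because a GP's posterior variance does not depend on the observed labels, conditioning the covariance on each chosen point without a label is a standard way to coordinate a batch: uncertainty collapses near already-selected candidates, so the next ratio minimizer lands elsewhere.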