Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration (2401.17037v2)

Published 30 Jan 2024 in cs.LG, cs.NA, math.NA, and stat.ML

Abstract: This paper proposes novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models. The new algorithms retain the ease of implementation of the classical GP-UCB algorithm, but the additional random exploration step accelerates their convergence, nearly achieving the optimal convergence rate. Furthermore, to facilitate Bayesian inference with an intractable likelihood, we propose to utilize optimization iterates for maximum a posteriori estimation to build a Gaussian process surrogate model for the unnormalized log-posterior density. We provide bounds for the Hellinger distance between the true and the approximate posterior distributions in terms of the number of design points. We demonstrate the effectiveness of our Bayesian optimization algorithms in non-convex benchmark objective functions, in a machine learning hyperparameter tuning problem, and in a black-box engineering design problem. The effectiveness of our posterior approximation approach is demonstrated in two Bayesian inference problems for parameters of dynamical systems.
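
To make the exploration-enhanced GP-UCB idea from the abstract concrete, below is a minimal sketch of noise-free Bayesian optimization that interleaves a standard upper-confidence-bound step with an additional uniformly sampled random exploration point. This is an illustration under assumptions, not the paper's exact algorithm: the toy objective f, the confidence parameter beta, the iteration budget, and the use of scikit-learn's GaussianProcessRegressor with a Matérn kernel are all illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):
    # Hypothetical non-convex 1-D objective standing in for the black-box function.
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

rng = np.random.default_rng(0)
bounds = (0.0, 2.0)

# Small initial design of noise-free evaluations.
X = rng.uniform(*bounds, size=(3, 1))
y = f(X).ravel()

# GP surrogate with a Matérn kernel; tiny alpha since observations are noise-free.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-10, normalize_y=True)

n_iters, beta = 20, 2.0
for t in range(n_iters):
    gp.fit(X, y)
    grid = np.linspace(*bounds, 500).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)

    # UCB step: exploit the surrogate's upper confidence bound on a dense grid.
    x_ucb = grid[np.argmax(mu + beta * sigma)]

    # Random exploration step: one extra uniformly sampled design point,
    # intended to improve the accuracy of the GP surrogate over the domain.
    x_rand = rng.uniform(*bounds, size=(1,))

    X = np.vstack([X, x_ucb.reshape(1, -1), x_rand.reshape(1, -1)])
    y = np.append(y, [f(x_ucb).item(), f(x_rand).item()])

print("best observed value:", y.max(), "at x =", X[np.argmax(y)].item())
```

The extra uniformly sampled point is the "random exploration step" referred to in the abstract: it spreads design points over the domain independently of the acquisition rule, which is what drives the improved surrogate accuracy and faster convergence claimed for the proposed strategies.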
