
Abstract

Bayesian Optimization (BO) is typically used to optimize an unknown function $f$ that is noisy and costly to evaluate, by maximizing an acquisition function at each optimization step. Although provably asymptotically optimal BO algorithms are efficient at optimizing low-dimensional functions, scaling them to high-dimensional spaces remains an open problem, often tackled by assuming an additive structure for $f$. In doing so, BO algorithms typically introduce additional restrictive assumptions on the additive structure that reduce their applicability domain. This paper makes two main contributions: (i) we relax the restrictive assumptions on the additive structure of $f$ without weakening the maximization guarantees of the acquisition function, and (ii) we address the over-exploration problem of decentralized BO algorithms. To these ends, we propose DuMBO, an asymptotically optimal decentralized BO algorithm that achieves very competitive performance against state-of-the-art BO algorithms, especially when the additive structure of $f$ comprises high-dimensional factors.
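The loop described above (fit a surrogate to past evaluations of $f$, then maximize an acquisition function to pick the next query) can be illustrated with a minimal sketch. This is not the paper's DuMBO algorithm: it assumes a plain GP surrogate with an additive RBF kernel (one kernel per one-dimensional factor, mirroring the additive-structure assumption), a GP-UCB acquisition, and maximization over a random candidate set; the toy objective and all function names are illustrative.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # 1-D squared-exponential kernel between coordinate vectors a and b
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def additive_kernel(X, Y):
    # Additive structure: k(x, y) = k1(x_1, y_1) + k2(x_2, y_2),
    # i.e. f is modeled as a sum of low-dimensional factors
    return rbf(X[:, 0], Y[:, 0]) + rbf(X[:, 1], Y[:, 1])

def gp_posterior(Xtr, ytr, Xte, noise=1e-4):
    # Standard GP regression equations with the additive kernel
    K = additive_kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = additive_kernel(Xte, Xtr)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(additive_kernel(Xte, Xte)) - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def bo_loop(f, n_init=5, n_iter=20, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n_init, 2))          # initial random design
    y = np.array([f(x) for x in X])
    cand = rng.uniform(size=(2000, 2))         # candidate set stands in for exact maximization
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, cand)
        x_next = cand[np.argmax(mu + beta * sd)]  # GP-UCB acquisition step
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# Toy additive objective: each factor depends on a single coordinate;
# the maximum (value 0) is at (0.3, 0.7)
f = lambda x: -(x[0] - 0.3) ** 2 - (x[1] - 0.7) ** 2
x_best, y_best = bo_loop(f)
```

Replacing the candidate-set maximization with an exact (and in high dimensions, expensive) inner optimization is precisely the step where decentralized methods such as DuMBO differ, by splitting the acquisition maximization across the additive factors.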

