Expected Coordinate Improvement for High-Dimensional Bayesian Optimization (2404.11917v1)
Abstract: The Bayesian optimization (BO) algorithm is popular for solving low-dimensional expensive optimization problems. Extending Bayesian optimization to high dimensions is a meaningful but challenging task, in large part because it is difficult to find good infill solutions when the acquisition function is itself high-dimensional. In this work, we propose the expected coordinate improvement (ECI) criterion for high-dimensional Bayesian optimization. The ECI criterion measures the potential improvement obtained by moving the current best solution along a single coordinate. The proposed approach selects the coordinate with the highest ECI value to refine in each iteration, and gradually covers all the coordinates by iterating over them. The greatest advantage of the proposed ECI-BO (expected coordinate improvement based Bayesian optimization) algorithm over standard BO is that its infill selection problem is always one-dimensional and thus easy to solve. Numerical experiments show that the proposed algorithm achieves significantly better results than the standard BO algorithm, and competitive results compared with five state-of-the-art high-dimensional BO algorithms. This work provides a simple but efficient approach to high-dimensional Bayesian optimization.
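The coordinate-wise selection the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a hypothetical surrogate `predict(x)` returning a posterior mean and standard deviation (e.g. from a Gaussian process), and it uses a simple grid search as a stand-in for a proper one-dimensional optimizer of the acquisition function.

```python
import math
import numpy as np

def expected_improvement(mu, sigma, f_best):
    """One-point EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))      # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_best - mu) * cdf + sigma * pdf

def expected_coordinate_improvement(predict, x_best, f_best, i, lo, hi, n_grid=101):
    """Best EI attainable by moving x_best along coordinate i only (grid sketch)."""
    best_eci, best_val = 0.0, x_best[i]
    for v in np.linspace(lo, hi, n_grid):
        x = x_best.copy()
        x[i] = v
        mu, sigma = predict(x)  # surrogate posterior at the one-coordinate candidate
        ei = expected_improvement(mu, sigma, f_best)
        if ei > best_eci:
            best_eci, best_val = ei, v
    return best_eci, best_val

def select_coordinate(predict, x_best, f_best, bounds):
    """Pick the coordinate whose one-dimensional refinement has the highest ECI."""
    scores = [expected_coordinate_improvement(predict, x_best, f_best, i, lo, hi)[0]
              for i, (lo, hi) in enumerate(bounds)]
    return int(np.argmax(scores))
```

The infill subproblem inside `expected_coordinate_improvement` is always one-dimensional regardless of the problem dimension, which is the property the abstract highlights.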
- Dawei Zhan