
Constrained Bayesian Optimization with Adaptive Active Learning of Unknown Constraints (2310.08751v1)

Published 12 Oct 2023 in cs.LG and cs.AI

Abstract: Optimizing objectives under constraints, where both the objectives and constraints are black box functions, is a common scenario in real-world applications such as scientific experimental design, design of medical therapies, and industrial process optimization. One popular approach to handling these complex scenarios is Bayesian Optimization (BO). In terms of theoretical behavior, BO is relatively well understood in the unconstrained setting, where its principles have been well explored and validated. However, when it comes to constrained Bayesian optimization (CBO), the existing framework often relies on heuristics or approximations without the same level of theoretical guarantees. In this paper, we delve into the theoretical and practical aspects of constrained Bayesian optimization, where the objective and constraints can be independently evaluated and are subject to noise. By recognizing that both the objective and constraints can help identify high-confidence regions of interest (ROI), we propose an efficient CBO framework that intersects the ROIs identified from each aspect to determine the general ROI. The ROI, coupled with a novel acquisition function that adaptively balances the optimization of the objective and the identification of feasible regions, enables us to derive rigorous theoretical justifications for its performance. We showcase the efficiency and robustness of our proposed CBO framework through empirical evidence and discuss the fundamental challenge of deriving practical regret bounds for CBO algorithms.
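To make the high-level recipe in the abstract concrete, the sketch below illustrates one plausible reading of the ROI-intersection idea: fit independent Gaussian process surrogates to the noisy objective and constraint observations, keep only candidates that are still plausibly feasible and could still be near-optimal according to confidence bounds, and query where the combined uncertainty inside the intersected ROI is largest. This is a minimal illustration, not the paper's algorithm or its acquisition function; the scikit-learn surrogates, the single constraint, the discrete candidate set, and the fixed confidence width `beta` are all assumptions made for brevity.

```python
# Hedged sketch of ROI intersection for constrained BO on a discrete candidate
# set. Not the paper's method: a toy illustration of intersecting confidence-
# bound ROIs from the objective and a single constraint.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy black boxes: minimize f(x) subject to c(x) <= 0 (both observed with noise).
f = lambda x: np.sin(3 * x) + 0.5 * x     # objective
c = lambda x: np.cos(3 * x) - 0.3         # constraint

# Discrete candidate set and a few noisy initial observations.
X_cand = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
X_obs = rng.uniform(0.0, 2.0, size=(5, 1))
y_obs = f(X_obs).ravel() + 0.05 * rng.standard_normal(5)
c_obs = c(X_obs).ravel() + 0.05 * rng.standard_normal(5)

beta = 2.0  # fixed confidence-bound width (an assumption, not the paper's schedule)

for t in range(20):
    gp_f = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-3).fit(X_obs, y_obs)
    gp_c = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-3).fit(X_obs, c_obs)
    mu_f, sd_f = gp_f.predict(X_cand, return_std=True)
    mu_c, sd_c = gp_c.predict(X_cand, return_std=True)
    lcb_f, ucb_f = mu_f - beta * sd_f, mu_f + beta * sd_f
    lcb_c = mu_c - beta * sd_c

    # ROI from the constraint: candidates that are still plausibly feasible.
    roi_c = lcb_c <= 0.0
    # ROI from the objective: candidates whose optimistic value could still beat
    # the best pessimistic value among plausibly feasible candidates.
    thresh = ucb_f[roi_c].min() if roi_c.any() else ucb_f.min()
    roi_f = lcb_f <= thresh
    roi = roi_f & roi_c if (roi_f & roi_c).any() else roi_f | roi_c

    # Simple acquisition inside the intersected ROI: largest combined uncertainty,
    # crudely balancing optimizing f against learning the feasible region.
    acq = np.where(roi, sd_f + sd_c, -np.inf)
    x_next = X_cand[np.argmax(acq)]

    # Query the noisy black boxes and update the datasets.
    X_obs = np.vstack([X_obs, x_next.reshape(1, -1)])
    y_obs = np.append(y_obs, f(x_next).item() + 0.05 * rng.standard_normal())
    c_obs = np.append(c_obs, c(x_next).item() + 0.05 * rng.standard_normal())

feasible = c(X_obs).ravel() <= 0.0
if feasible.any():
    best = X_obs[feasible][np.argmin(f(X_obs[feasible]).ravel())]
    print("best feasible x found:", best.item())
```

Note that in the paper's setting the objective and constraints can be evaluated independently; the toy loop above queries both at every step purely to keep the example short.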

Authors (3)
  1. Fengxue Zhang
  2. Zejie Zhu
  3. Yuxin Chen