Bayesian Optimization with Formal Safety Guarantees via Online Conformal Prediction (2306.17815v3)

Published 30 Jun 2023 in cs.LG, cs.IT, eess.SP, and math.IT

Abstract: Black-box zeroth-order optimization is a central primitive for applications in fields as diverse as finance, physics, and engineering. In a common formulation of this problem, a designer sequentially attempts candidate solutions, receiving noisy feedback on the value of each attempt from the system. In this paper, we study scenarios in which feedback is also provided on the safety of the attempted solution, and the optimizer is constrained to limit the number of unsafe solutions that are tried throughout the optimization process. Focusing on methods based on Bayesian optimization (BO), prior art has introduced an optimization scheme -- referred to as SAFEOPT -- that is guaranteed not to select any unsafe solution with a controllable probability over feedback noise as long as strict assumptions on the safety constraint function are met. In this paper, a novel BO-based approach is introduced that satisfies safety requirements irrespective of properties of the constraint function. This strong theoretical guarantee is obtained at the cost of allowing for an arbitrary, controllable but non-zero, rate of violation of the safety constraint. The proposed method, referred to as SAFE-BOCP, builds on online conformal prediction (CP) and is specialized to the cases in which feedback on the safety constraint is either noiseless or noisy. Experimental results on synthetic and real-world data validate the advantages and flexibility of the proposed SAFE-BOCP.
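The ingredients described in the abstract can be illustrated with a toy sketch: a Gaussian-process surrogate restricts candidate solutions to an estimated safe set, and an online conformal update (in the style of adaptive conformal inference, reference 18) adjusts the width of the safety bound so that the long-run violation rate tracks a target level alpha. This is a simplified illustration, not the paper's SAFE-BOCP algorithm; the objective `f`, the safety function `q`, and all parameter values below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical 1-D problem: maximize f subject to q(x) >= 0 (safe set is [0.1, 0.7]).
f = lambda x: -(x - 0.6) ** 2          # objective to maximize
q = lambda x: 0.3 - abs(x - 0.4)       # safety margin; negative means unsafe

def gp_posterior(Xtr, ytr, Xte, ls=0.2, noise=1e-4):
    """Posterior mean/std of a zero-mean GP with a unit-variance RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = k(Xtr, Xte)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ytr
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

alpha = 0.1    # target rate of safety violations
gamma = 0.05   # learning rate of the online conformal update
lam = 1.0      # calibration multiplier on the safety bound, adapted online

X, yf, yq = [0.4], [f(0.4)], [q(0.4)]  # start from a known safe point
candidates = np.linspace(0.0, 1.0, 101)
violations = 0
T = 50

for t in range(T):
    Xtr = np.array(X)
    mu_f, sd_f = gp_posterior(Xtr, np.array(yf), candidates)
    mu_q, sd_q = gp_posterior(Xtr, np.array(yq), candidates)
    # Estimated safe set: calibrated lower bound on the safety margin.
    safe = mu_q - lam * sd_q >= 0.0
    if safe.any():
        ucb = mu_f + sd_f                      # optimistic acquisition
        x = candidates[safe][np.argmax(ucb[safe])]
    else:
        x = 0.4                                # fall back to the safe seed
    err = float(q(x) < 0.0)                    # 1 if the chosen point was unsafe
    violations += err
    # Online conformal step: widen the bound after a violation, shrink otherwise,
    # so the empirical violation rate is driven toward alpha.
    lam = max(0.0, lam + gamma * (err - alpha))
    X.append(x); yf.append(f(x)); yq.append(q(x))

print("violation rate:", violations / T)
```

The key design point mirrored from the paper is that safety control comes from the feedback-driven update of `lam`, not from assumptions on `q`: even if the GP model of the constraint is misspecified, the update reacts to observed violations and keeps their rate near the chosen target.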

References (29)
  1. R. O. Michaud and R. O. Michaud, Efficient Asset Management: A Practical Guide to Stock Portfolio Optimization and Asset Allocation. Oxford University Press, 2008.
  2. Y. Wang, R. Li, H. Dong, Y. Ma, J. Yang, F. Zhang, J. Zhu, and S. Li, “Capacity planning and optimization of business park-level integrated energy system based on investment constraints,” Energy, vol. 189, p. 116345, 2019.
  3. S. Xu, J. Li, P. Cai, X. Liu, B. Liu, and X. Wang, “Self-improving photosensitizer discovery system via Bayesian search with first-principle simulations,” Journal of the American Chemical Society, vol. 143, no. 47, pp. 19769–19777, 2021.
  4. C. L. Cortes, P. Lefebvre, N. Lauk, M. J. Davis, N. Sinclair, S. K. Gray, and D. Oblak, “Sample-efficient adaptive calibration of quantum networks using Bayesian optimization,” Phys. Rev. Appl., vol. 17, p. 034067, Mar 2022.
  5. W. Zhang, M. Derakhshani, G. Zheng, C. S. Chen, and S. Lambotharan, “Bayesian optimization of queuing-based multi-channel URLLC scheduling,” IEEE Transactions on Wireless Communications, 2022.
  6. Y. Zhang, O. Simeone, S. T. Jose, L. Maggi, and A. Valcarce, “Bayesian and multi-armed contextual meta-optimization for efficient wireless radio resource management,” IEEE Transactions on Cognitive Communications and Networking, pp. 1–1, 2023.
  7. Y. Sui, A. Gotovos, J. Burdick, and A. Krause, “Safe exploration for optimization with Gaussian processes,” in Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015.
  8. F. Berkenkamp, A. Krause, and A. P. Schoellig, “Bayesian optimization with safety constraints: Safe and automatic parameter tuning in robotics,” Machine Learning, pp. 1–35, 2021.
  9. M. Turchetta, F. Berkenkamp, and A. Krause, “Safe exploration for interactive machine learning,” in Proceedings of Advances in Neural Information Processing Systems, Vancouver, Canada, 2019.
  10. Y. Sui, V. Zhuang, J. Burdick, and Y. Yue, “Stagewise safe Bayesian optimization with Gaussian processes,” in Proceedings of International Conference on Machine Learning, Stockholm, Sweden, 2018.
  11. J. Rothfuss, C. Koenig, A. Rupenyan, and A. Krause, “Meta-learning priors for safe Bayesian optimization,” in Proceedings of Conference on Robot Learning, Atlanta, GA, USA, 2023.
  12. Y. Bengio, S. Lahlou, T. Deleu, E. J. Hu, M. Tiwari, and E. Bengio, “Gflownet foundations,” arXiv preprint arXiv:2111.09266, 2021.
  13. A. Slivkins et al., “Introduction to multi-armed bandits,” Foundations and Trends® in Machine Learning, vol. 12, no. 1-2, pp. 1–286, 2019.
  14. J. Mockus, “Global optimization and the Bayesian approach,” Bayesian Approach to Global Optimization: Theory and Applications, pp. 1–3, 1989.
  15. P. I. Frazier, “A tutorial on Bayesian optimization,” arXiv preprint arXiv:1807.02811, 2018.
  16. L. Maggi, A. Valcarce, and J. Hoydis, “Bayesian optimization for radio resource management: Open loop power control,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 7, pp. 1858–1871, 2021.
  17. D. Eriksson, M. Pearce, J. Gardner, R. D. Turner, and M. Poloczek, “Scalable global optimization via local Bayesian optimization,” in Proceedings of Advances in Neural Information Processing Systems, Vancouver, Canada, 2019.
  18. I. Gibbs and E. Candes, “Adaptive conformal inference under distribution shift,” in Proceedings of Advances in Neural Information Processing Systems, Virtual, 2021.
  19. S. Feldman, S. Bates, and Y. Romano, “Conformalized online learning: Online calibration without a holdout set,” arXiv preprint arXiv:2205.09095, 2022.
  20. L. Chen, S. T. Jose, I. Nikoloska, S. Park, T. Chen, O. Simeone, et al., “Learning with limited samples: Meta-learning and applications to communication systems,” Foundations and Trends® in Signal Processing, vol. 17, no. 2, pp. 79–208, 2023.
  21. V. Vovk, A. Gammerman, and G. Shafer, Algorithmic Learning in a Random World. Springer, 2005.
  22. A. N. Angelopoulos and S. Bates, “A gentle introduction to conformal prediction and distribution-free uncertainty quantification,” Foundations and Trends® in Machine Learning, vol. 16, no. 4, pp. 494–591, 2023.
  23. S. Stanton, W. Maddox, and A. G. Wilson, “Bayesian optimization with conformal prediction sets,” in Proceedings of International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 2023.
  24. Springer, 2004.
  25. N. Srinivas, A. Krause, S. M. Kakade, and M. W. Seeger, “Information-theoretic regret bounds for Gaussian process optimization in the bandit setting,” IEEE Transactions on Information Theory, vol. 58, no. 5, pp. 3250–3265, 2012.
  26. F. Harper and J. Konstan, “The MovieLens datasets: History and context,” ACM Transactions on Interactive Intelligent Systems (TIIS), vol. 5, no. 4, 2016.
  27. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Proceedings of Advances in Neural Information Processing Systems, Denver, USA, 2000.
  28. P.-L. Kang, C. Shang, and Z.-P. Liu, “Glucose to 5-hydroxymethylfurfural: Origin of site-selectivity resolved by machine learning based reaction sampling,” Journal of the American Chemical Society, vol. 141, no. 51, pp. 20525–20536, 2019.
  29. D. Widmer, D. Kang, B. Sukhija, J. Hübotter, A. Krause, and S. Coros, “Tuning legged locomotion controllers via safe Bayesian optimization,” arXiv preprint arXiv:2306.07092, 2023.