Looping in the Human: Collaborative and Explainable Bayesian Optimization (2310.17273v5)

Published 26 Oct 2023 in cs.LG, cs.HC, and stat.ML

Abstract: Like many optimizers, Bayesian optimization often falls short of gaining user trust due to opacity. While attempts have been made to develop human-centric optimizers, they typically assume user knowledge is well-specified and error-free, employing users mainly as supervisors of the optimization process. We relax these assumptions and propose a more balanced human-AI partnership with our Collaborative and Explainable Bayesian Optimization (CoExBO) framework. Instead of explicitly requiring a user to provide a knowledge model, CoExBO employs preference learning to seamlessly integrate human insights into the optimization, resulting in algorithmic suggestions that resonate with user preference. CoExBO explains its candidate selection at every iteration to foster trust, empowering users with a clearer grasp of the optimization. Furthermore, CoExBO offers a no-harm guarantee, allowing users to make mistakes; even with extreme adversarial interventions, the algorithm converges asymptotically to a vanilla Bayesian optimization. We validate CoExBO's efficacy through human-AI teaming experiments in lithium-ion battery design, highlighting substantial improvements over conventional methods. Code is available at https://github.com/ma921/CoExBO.

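To make the loop concrete, below is a minimal sketch of how a CoExBO-style round could look on a toy 1-D problem. It is an illustrative reconstruction under stated assumptions, not the authors' implementation (their code is at the repository linked above): the `pref_score` kernel heuristic, the 1/t decay of human influence, the simulated preference oracle, and the scikit-learn GP surrogate are all stand-ins for the paper's preference-learning and no-harm mechanisms.

```python
# Illustrative sketch of a CoExBO-style human-in-the-loop BO round.
# NOT the authors' implementation (see https://github.com/ma921/CoExBO);
# `pref_score`, the 1/t decay, and the noisy human oracle are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    # Toy black-box objective standing in for, e.g., electrolyte conductivity.
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

def human_prefers(x1, x2):
    # Simulated, noisy (possibly mistaken) pairwise preference oracle.
    return objective(x1) + 0.3 * rng.normal() > objective(x2) + 0.3 * rng.normal()

duels = []  # (winner, loser) pairs collected from the human

def pref_score(x, ell=0.5):
    # Crude kernel-smoothed stand-in for a learned preference utility:
    # regions near past winners score high, regions near past losers score low.
    if not duels:
        return 0.0
    winners = np.array([w for w, _ in duels])
    losers = np.array([l for _, l in duels])
    k = lambda a: np.exp(-((x - a) ** 2) / (2 * ell ** 2)).sum()
    return (k(winners) - k(losers)) / len(duels)

X = rng.uniform(-2, 2, size=(3, 1))          # initial design
y = objective(X).ravel()

for t in range(1, 21):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    grid = np.linspace(-2, 2, 400).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sd                      # vanilla GP-UCB acquisition
    gamma = 1.0 / t                          # human weight decays over rounds
    scores = ucb + gamma * np.array([pref_score(x) for x in grid])
    # Show the human the top two candidates; the human picks one (a duel).
    a, b = (grid[i].item() for i in np.argsort(scores)[-2:])
    x_next, x_rej = (a, b) if human_prefers(a, b) else (b, a)
    duels.append((x_next, x_rej))
    X = np.vstack([X, [[x_next]]])
    y = np.append(y, objective(x_next))

print("best observed:", X[np.argmax(y)].item(), y.max())
```

The 1/t schedule is what gives the sketch its no-harm flavor: as gamma tends to zero, the preference term vanishes and candidate selection reduces to plain GP-UCB, so even systematically wrong human feedback cannot derail the loop asymptotically, mirroring the guarantee claimed in the abstract.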