
Explaining Bayesian Optimization by Shapley Values Facilitates Human-AI Collaboration (2403.04629v2)

Published 7 Mar 2024 in cs.LG, cs.AI, cs.HC, cs.RO, and stat.ML

Abstract: Bayesian optimization (BO) with Gaussian processes (GP) has become an indispensable algorithm for black-box optimization problems. Not without a dash of irony, BO is often considered a black box itself, lacking ways to provide reasons as to why certain parameters are proposed to be evaluated. This is particularly relevant in human-in-the-loop applications of BO, such as in robotics. We address this issue by proposing ShapleyBO, a framework for interpreting BO's proposals by game-theoretic Shapley values. They quantify each parameter's contribution to BO's acquisition function. Exploiting the linearity of Shapley values, we are further able to identify how strongly each parameter drives BO's exploration and exploitation for additive acquisition functions like the confidence bound. We also show that ShapleyBO can disentangle the contributions to exploration into those that explore aleatoric and epistemic uncertainty. Moreover, our method gives rise to a ShapleyBO-assisted human-machine interface (HMI), allowing users to interfere with BO in case proposals do not align with human reasoning. We demonstrate this HMI's benefits for the use case of personalizing wearable robotic devices (assistive back exosuits) by human-in-the-loop BO. Results suggest human-BO teams with access to ShapleyBO can achieve lower regret than teams without.
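The core idea from the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' implementation: it computes exact Shapley values of a proposal's parameters with respect to a confidence-bound-style acquisition function, and uses the linearity of Shapley values to split each parameter's contribution into an exploitation (mean) and an exploration (uncertainty) share. The `mean` and `stdev` functions are hypothetical stand-ins for a GP posterior, and the baseline configuration is an assumed reference point; a real ShapleyBO pipeline would attach these to a fitted GP surrogate.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of f's inputs, comparing x against a baseline.

    The value function v(S) evaluates f with coordinates in S taken from x
    and the rest from the baseline; phi[i] applies the usual Shapley
    coalition weighting. Exponential in len(x), so only for small d.
    """
    d = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(d)]
        return f(z)

    phi = [0.0] * d
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy additive confidence-bound acquisition (stand-ins for a GP posterior):
beta = 2.0
def mean(z):   return -sum(zi ** 2 for zi in z)   # exploitation term
def stdev(z):  return sum(abs(zi) for zi in z)    # exploration term (toy)
def cb(z):     return mean(z) + beta * stdev(z)   # confidence bound

x_prop = [0.5, -1.0, 0.0]   # BO's proposed configuration
x_base = [0.0,  0.0, 0.0]   # reference configuration (assumed)

phi_cb   = shapley_values(cb, x_prop, x_base)                      # full CB
phi_mean = shapley_values(mean, x_prop, x_base)                    # exploit share
phi_sd   = shapley_values(lambda z: beta * stdev(z), x_prop, x_base)  # explore share
```

Two properties the abstract relies on hold exactly here: efficiency (the `phi_cb` values sum to `cb(x_prop) - cb(x_base)`, so each parameter's contribution to the proposal's acquisition value is fully attributed) and linearity (`phi_cb[i] == phi_mean[i] + phi_sd[i]`, which is what lets ShapleyBO disentangle exploration from exploitation for additive acquisition functions).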
