
Explainable Bayesian Optimization

arXiv:2401.13334
Published Jan 24, 2024 in cs.LG and cs.AI

Abstract

In industry, Bayesian optimization (BO) is widely applied in the human-AI collaborative parameter tuning of cyber-physical systems. However, BO's solutions may deviate from human experts' actual goals due to approximation errors and simplified objectives, requiring subsequent tuning. The black-box nature of BO limits the collaborative tuning process because the expert does not trust the BO recommendations. Current explainable AI (XAI) methods are not tailored for optimization and thus fall short of addressing it. To bridge this gap, we propose TNTRules (TUNE-NOTUNE Rules), a post-hoc, rule-based explainability method that produces high-quality explanations through multiobjective optimization. Our evaluation on benchmark optimization problems and real-world hyperparameter optimization tasks demonstrates TNTRules' superiority over state-of-the-art XAI methods in generating high-quality explanations. This work contributes to the intersection of BO and XAI, providing interpretable optimization techniques for real-world applications.
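
As a rough, hypothetical illustration of the setting the abstract describes (not the authors' implementation), the sketch below runs a toy one-dimensional BO loop with a Gaussian-process surrogate and then distills the evaluated points into a simple TUNE/NO-TUNE interval rule. The 10% thresholding heuristic and the lower-confidence-bound acquisition are assumptions chosen for brevity; TNTRules itself derives and tunes its rules via multiobjective optimization.

```python
# Minimal sketch (assumed, not the TNTRules implementation): a toy BO loop
# followed by naive post-hoc extraction of a TUNE/NO-TUNE interval rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def objective(x):
    # Toy black-box objective to be minimized (stands in for a real system).
    return np.sin(3 * x) + 0.5 * x**2

# --- Crude BO loop: fit a GP surrogate, pick the next point greedily ---
X = rng.uniform(-2, 2, size=(5, 1))            # initial design
y = objective(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)

for _ in range(15):
    gp.fit(X, y)
    grid = np.linspace(-2, 2, 400).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmin(mu - sigma)]        # lower-confidence-bound pick
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

# --- Post-hoc rule extraction (illustrative stand-in for TNTRules) ---
# Heuristic assumption: evaluations within 10% of the best observed value
# count as "good"; the interval they span is reported as a TUNE rule and
# everything else as NO-TUNE. The real method optimizes rule quality with
# multiobjective optimization rather than a fixed threshold.
best = y.min()
good = X[y <= best + 0.1 * (y.max() - best)].ravel()
print(f"TUNE   : x in [{good.min():.2f}, {good.max():.2f}]  (promising region)")
print("NO-TUNE: x outside this interval")
```

Even this crude rule gives a human expert something checkable (a bounded parameter region rather than a single opaque recommendation), which is the kind of collaborative trust-building the abstract motivates.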

