
Explainable Bayesian Optimization (2401.13334v2)

Published 24 Jan 2024 in cs.LG and cs.AI

Abstract: Manual parameter tuning of cyber-physical systems is common practice but labor-intensive. Bayesian Optimization (BO) offers an automated alternative, yet its black-box nature reduces trust and limits human-BO collaborative system tuning: experts struggle to interpret BO recommendations because no explanations are provided. This paper addresses the post-hoc BO explainability problem for cyber-physical systems. We introduce TNTRules (Tune-No-Tune Rules), a novel algorithm that provides both global and local explanations for BO recommendations. TNTRules generates actionable rules and visual graphs that identify bounds and ranges for optimal solutions, as well as potential alternative solutions. Unlike existing explainable AI (XAI) methods, TNTRules is tailored specifically to BO: it encodes uncertainty via a variance-pruning technique combined with hierarchical agglomerative clustering, and a multi-objective optimization approach maximizes explanation quality. We evaluate TNTRules using established XAI metrics (Correctness, Completeness, and Compactness) and compare it against adapted baseline methods. The results demonstrate that TNTRules generates high-fidelity, compact, and complete explanations, significantly outperforming three baselines on five multi-objective test functions and two hyperparameter tuning problems.
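
The abstract outlines TNTRules at a high level: hierarchical agglomerative clustering over the points a BO run has evaluated, variance-based pruning to encode the surrogate's uncertainty, and rules expressed as parameter bounds. The sketch below illustrates one plausible reading of that pipeline; the function name tnt_rules_sketch, the Ward linkage, the quantile cutoff, and the TUNE/NO-TUNE labeling rule are assumptions for illustration, not the paper's implementation.

```python
# A minimal, hypothetical sketch of the TNTRules idea as described in the
# abstract: cluster the points a BO run has evaluated, prune clusters with
# high GP predictive variance, and turn each surviving cluster into a
# "TUNE" / "NO-TUNE" rule given by per-parameter bounds. The thresholds,
# clustering settings, and rule format are illustrative assumptions, not
# the authors' exact algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.gaussian_process import GaussianProcessRegressor

def tnt_rules_sketch(X, y, n_clusters=4, var_quantile=0.5):
    """X: (n, d) evaluated inputs; y: (n,) objective values (minimized)."""
    # Surrogate GP fitted on the BO history, as in standard BO.
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    _, global_std = gp.predict(X, return_std=True)
    var_cutoff = np.quantile(global_std, var_quantile)

    # Hierarchical agglomerative clustering (Ward linkage) over the inputs.
    labels = fcluster(linkage(X, method="ward"), n_clusters, criterion="maxclust")

    rules = []
    for c in np.unique(labels):
        pts = X[labels == c]
        mean, std = gp.predict(pts, return_std=True)
        # "Variance pruning" (assumed form): skip clusters whose average
        # predictive uncertainty exceeds the global cutoff.
        if std.mean() > var_cutoff:
            continue
        # Rule body: axis-aligned bounds of the cluster; label it TUNE if
        # the cluster reaches better-than-median predicted objective values.
        action = "TUNE" if mean.min() < np.median(y) else "NO-TUNE"
        rules.append((action, pts.min(axis=0), pts.max(axis=0)))
    return rules
```

Each returned rule reads as "keep (or avoid) parameters inside these per-dimension bounds", which matches the tune/no-tune framing in the paper's title.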
