
A PSO Based Method to Generate Actionable Counterfactuals for High Dimensional Data (2311.12825v2)

Published 30 Sep 2023 in cs.AI, cs.LG, and stat.ME

Abstract: Counterfactual explanations (CFEs) explain a machine learning model by showing the minimal changes to a data point's features that would yield an alternate class prediction. They help users identify which of their data attributes caused an undesirable prediction, such as a loan or credit card rejection. We describe an efficient and actionable counterfactual (CF) generation method based on particle swarm optimization (PSO). We propose a simple objective function for the instance-centric CF generation problem. PSO offers considerable flexibility: it supports multi-objective optimization in high dimensions, generation of multiple CFs, and box constraints or immutability of data attributes. We propose an algorithm that incorporates these features and enables greater control over the proximity and sparsity of the generated CFs. The proposed algorithm is evaluated with a set of actionability metrics on real-world datasets, and its results are superior to those of state-of-the-art methods.
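The abstract does not spell out the authors' objective function or PSO variant. As a rough illustration of the general idea only, the sketch below runs a plain PSO over a toy logistic model to find a counterfactual close (in L1) to the original instance while reaching a target class probability; `toy_model`, its weights, the hinge-plus-L1 objective, and all hyperparameters (`lam`, `target`, swarm settings) are illustrative assumptions, not the paper's actual formulation. Immutable features are frozen by zeroing their velocity, and box constraints are enforced by clipping.

```python
import numpy as np

def toy_model(x):
    # Hypothetical black-box classifier (stand-in for e.g. a loan model):
    # probability of the positive class under a fixed linear score.
    w = np.array([1.0, -2.0, 0.5, 0.0])
    return 1.0 / (1.0 + np.exp(-(x @ w - 0.5)))

def pso_counterfactual(x0, model, immutable, bounds, target=0.7,
                       n_particles=40, n_iter=200, lam=0.1, seed=0):
    """Search for x with model(x) >= target while staying L1-close to x0.
    `immutable` is a boolean mask of features that must not change;
    `bounds` = (lo, hi) are box constraints on every feature."""
    rng = np.random.default_rng(seed)
    d = x0.size
    lo, hi = bounds
    free = ~immutable

    def objective(x):
        # Hinge on reaching the target class + L1 proximity/sparsity penalty.
        return max(0.0, target - model(x)) + lam * np.abs(x - x0).sum()

    # Initialise particles around x0, perturbing only the free features.
    pos = np.clip(x0 + rng.normal(0.0, 0.5, (n_particles, d)) * free, lo, hi)
    pos[:, immutable] = x0[immutable]
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        vel[:, immutable] = 0.0           # immutability: frozen dims never move
        pos = np.clip(pos + vel, lo, hi)  # box constraints
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```

For example, an instance rejected by `toy_model` (probability below 0.5) can be passed in with its last feature marked immutable; the returned counterfactual moves only the mutable features, mostly the one with the largest weight, until the target probability is reached. Note that a population-based search like this needs no gradients from the model, which is what makes the approach applicable to arbitrary black-box classifiers.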

