
GRAPHSHAP: Explaining Identity-Aware Graph Classifiers Through the Language of Motifs (2202.08815v2)

Published 17 Feb 2022 in cs.LG and cs.AI

Abstract: Most methods for explaining black-box classifiers (e.g., on tabular data, images, or time series) rely on measuring the impact that removing/perturbing features has on the model output. This forces the explanation language to match the classifier's feature space. However, when dealing with graph data, in which the basic features correspond to the edges describing the graph structure, this matching between feature space and explanation language might not be appropriate. Decoupling the feature space (edges) from a desired high-level explanation language (such as motifs) is thus a major challenge towards developing actionable explanations for graph classification tasks. In this paper, we introduce GRAPHSHAP, a Shapley-based approach able to provide motif-based explanations for identity-aware graph classifiers, assuming no knowledge whatsoever about the model or its training data: the only requirement is that the classifier can be queried as a black box at will. For the sake of computational efficiency, we explore a progressive approximation strategy and show how a simple kernel can efficiently approximate explanation scores, thus allowing GRAPHSHAP to scale to scenarios with a large explanation space (i.e., a large number of motifs). We showcase GRAPHSHAP on a real-world brain-network dataset consisting of patients affected by Autism Spectrum Disorder and a control group. Our experiments highlight how the classification provided by a black-box model can be effectively explained by a few connectomics patterns.
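The abstract describes the core mechanism: treat each candidate motif as a player in a cooperative game, query the black-box classifier on graphs with subsets of motifs masked, and attribute the prediction to motifs via Shapley values. For reference, the classical Shapley value of player i in a game with value function v over player set N is

    \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!} \big( v(S \cup \{i\}) - v(S) \big)

whose exact computation requires 2^|N| coalition evaluations, which is what motivates the paper's approximation strategies. As a rough illustration only (not GRAPHSHAP's actual algorithm: the paper's motif-masking scheme and kernel approximation are not reproduced here, and the predict_proba and mask_motifs interfaces below are hypothetical), a permutation-sampling Shapley estimator over motifs might look like:

    import random

    def shapley_motif_scores(graph, motifs, predict_proba, mask_motifs, n_samples=1000):
        """Monte Carlo estimate of Shapley values for motif-level attributions.

        Hypothetical interfaces (assumptions, not the paper's API):
          predict_proba(graph) -> float   # black-box classifier score for `graph`
          mask_motifs(graph, subset)      # copy of `graph` with the edges of every
                                          # motif index in `subset` removed
        """
        k = len(motifs)
        scores = [0.0] * k
        for _ in range(n_samples):
            perm = random.sample(range(k), k)       # random ordering of the k motifs
            present = set()
            # Baseline: all motifs masked, i.e. value of the empty coalition.
            prev = predict_proba(mask_motifs(graph, set(range(k))))
            for i in perm:
                present.add(i)
                masked = set(range(k)) - present    # mask every motif not yet added
                curr = predict_proba(mask_motifs(graph, masked))
                scores[i] += curr - prev            # marginal contribution of motif i
                prev = curr
        return [s / n_samples for s in scores]

This sketch converges to the Shapley values as n_samples grows but still costs one classifier query per motif per permutation; the paper's kernel-based approximation is precisely a way to avoid that per-coalition query cost when the number of motifs is large.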
