Semantic Interpretation and Validation of Graph Attention-based Explanations for GNN Models

Published 8 Aug 2023 in cs.LG, cs.AI, cs.CY, and cs.RO | arXiv:2308.04220v2

Abstract: In this work, we propose a methodology for investigating the use of semantic attention to enhance the explainability of Graph Neural Network (GNN)-based models. Graph Deep Learning (GDL) has emerged as a promising field for tasks like scene interpretation, leveraging flexible graph structures to concisely describe complex features and relationships. As traditional explainability methods used in eXplainable AI (XAI) cannot be directly applied to such structures, graph-specific approaches have been introduced. Attention has previously been employed to estimate the importance of input features in GDL; however, the fidelity of this method in generating accurate and consistent explanations has been questioned. To evaluate the validity of using attention weights as feature importance indicators, we introduce semantically-informed perturbations and correlate predicted attention weights with the accuracy of the model. Our work extends existing attention-based graph explainability methods by analysing the divergence in the attention distributions in relation to semantically sorted feature sets and the behaviour of a GNN model, efficiently estimating feature importance. We apply our methodology to a lidar point cloud estimation model, successfully identifying key semantic classes that contribute to enhanced performance and effectively generating reliable post-hoc semantic explanations.
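The core validation idea in the abstract — perturb features belonging to one semantic class at a time and check whether the attention mass assigned to that class correlates with the resulting accuracy drop — can be sketched as follows. This is an illustrative toy sketch, not the authors' implementation: the class names, attention values, and the perturbation response are all hypothetical stand-ins for re-evaluating a trained GNN on perturbed point cloud inputs.

```python
# Illustrative sketch (not the paper's code): correlating per-class
# attention mass with the accuracy drop caused by perturbing that class.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical semantic classes for a lidar scene, with the mean attention
# weight a trained GNN assigns to nodes of each class (values invented).
classes = ["road", "building", "vegetation", "vehicle", "pole"]
mean_attention = np.array([0.35, 0.25, 0.15, 0.15, 0.10])

def accuracy_drop_under_perturbation(attn, noise_scale=0.02):
    """Stand-in for re-running the model with one semantic class
    perturbed at a time; here the drop is simulated as proportional
    to attention plus noise, purely for illustration."""
    return attn + rng.normal(0.0, noise_scale, size=attn.shape)

drops = accuracy_drop_under_perturbation(mean_attention)

# If attention is a faithful feature-importance indicator, the rank
# correlation between attention mass and accuracy drop should be high.
rho, pval = spearmanr(mean_attention, drops)
for name, a, d in zip(classes, mean_attention, drops):
    print(f"{name:>10s}: attention={a:.2f}  accuracy drop={d:.3f}")
print(f"Spearman rho = {rho:.2f}")
```

In the actual methodology, `accuracy_drop_under_perturbation` would be replaced by re-evaluating the pose estimation model on semantically perturbed point clouds, and a low correlation would flag classes where attention is not a reliable importance proxy.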
