GOAt: Explaining Graph Neural Networks via Graph Output Attribution

Published 26 Jan 2024 in cs.LG (arXiv:2401.14578v1)

Abstract: Understanding the decision-making process of Graph Neural Networks (GNNs) is crucial to their interpretability. Most existing methods for explaining GNNs rely on training auxiliary models, so the explanations themselves remain black-boxed. This paper introduces Graph Output Attribution (GOAt), a novel method to attribute graph outputs to input graph features, producing GNN explanations that are faithful, discriminative, and stable across similar samples. By expanding the GNN as a sum of scalar products involving node features, edge features, and activation patterns, we propose an efficient analytical method that computes the contribution of each node or edge feature to each scalar product and aggregates the contributions across all scalar products in the expansion to derive the importance of each node and edge. Through extensive experiments on synthetic and real-world data, we show that our method not only outperforms various state-of-the-art GNN explainers on the commonly used fidelity metric, but also exhibits stronger discriminability and stability by a remarkable margin.
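The expansion-and-attribution idea can be illustrated on a toy case. The sketch below uses a single-layer linear "GNN" y = A X W, so each output entry decomposes exactly into scalar products A[v,i]·X[i,j]·W[j,k]; it then splits each product's value equally between its two input factors (the edge entry and the node-feature entry). This is only a simplified illustration of the attribution principle, not the paper's full method, which also handles multiple layers, nonlinearities via activation patterns, and the precise per-factor contribution rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer linear "GNN": y = A @ X @ W.
n, d_in, d_out = 4, 3, 2
A = (rng.random((n, n)) < 0.5).astype(float)  # adjacency (edge indicators)
X = rng.random((n, d_in))                     # node features
W = rng.random((d_in, d_out))                 # learned weights (not attributed)

y = A @ X @ W

# Expand y[v, k] = sum_{i, j} A[v, i] * X[i, j] * W[j, k] and split each
# scalar product equally between its two input factors: the edge entry
# A[v, i] and the node-feature entry X[i, j].
edge_contrib = np.zeros((n, n))      # importance of each edge (v, i)
node_contrib = np.zeros((n, d_in))   # importance of each feature (i, j)
for v in range(n):
    for i in range(n):
        for j in range(d_in):
            for k in range(d_out):
                p = A[v, i] * X[i, j] * W[j, k]
                edge_contrib[v, i] += p / 2.0
                node_contrib[i, j] += p / 2.0

# Faithfulness check: the attributed contributions sum back to the
# total model output, since the expansion is exact for a linear model.
assert np.isclose(edge_contrib.sum() + node_contrib.sum(), y.sum())
```

Because the attribution is computed analytically from the model's own parameters rather than by fitting an auxiliary explainer, identical inputs always yield identical explanations, which is the source of the stability the abstract claims.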
