Generating In-Distribution Proxy Graphs for Explaining Graph Neural Networks (2402.02036v2)

Published 3 Feb 2024 in cs.LG

Abstract: Graph Neural Networks (GNNs) have become a building block in graph data processing, with wide applications in critical domains. The growing need to deploy GNNs in high-stakes applications necessitates explainability for users in the decision-making process. A popular paradigm for GNN explainability is to identify explainable subgraphs by comparing their labels with those of the original graphs. This task is challenging because of the substantial distributional shift between the original graphs in the training set and the set of explainable subgraphs, which prevents accurate label prediction on the subgraphs. To address this, we propose a novel method that generates proxy graphs for explainable subgraphs that lie within the distribution of the training data. We introduce a parametric method that employs graph generators to produce proxy graphs, together with a new training objective based on information theory that ensures the proxy graphs not only adhere to the distribution of the training data but also preserve the explanatory factors. The generated proxy graphs can then be reliably used to approximate the label predictions of the explainable subgraphs. Empirical evaluations across various datasets demonstrate that our method achieves more accurate explanations for GNNs.
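To make the two ingredients of the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of a proxy-graph training loss: an in-distribution term that pushes the generated proxy graph toward the training-data distribution (approximated here by an autoencoder-style edge reconstruction term), and a preservation term that keeps the frozen GNN's prediction on the proxy graph close to its prediction on the original graph. The names `gnn`, `generator`, `edge_mask`, and `beta` are assumptions introduced for illustration only.

```python
# Hedged sketch, not the paper's method: a toy loss that combines
# (a) an in-distribution term and (b) an explanation-preservation term,
# mirroring the structure described in the abstract.

import torch
import torch.nn.functional as F

def proxy_graph_loss(gnn, generator, x, adj, edge_mask, beta=1.0):
    """
    gnn:       frozen GNN to be explained; maps (x, dense_adj) -> class logits
    generator: hypothetical parametric graph generator; maps (x, masked_adj)
               -> edge probabilities of the proxy graph, shape [n, n]
    x:         node features, shape [n, d]
    adj:       dense 0/1 adjacency of the original graph (float), shape [n, n]
    edge_mask: soft explanation mask over edges, values in [0, 1], shape [n, n]
    beta:      trade-off weight between the two terms
    """
    # Candidate explanatory subgraph as a soft-masked adjacency.
    sub_adj = adj * edge_mask

    # Generator proposes a proxy graph conditioned on the masked subgraph.
    proxy_edge_probs = generator(x, sub_adj)

    # (a) In-distribution surrogate: proxy edges should reconstruct the
    #     original edge pattern (autoencoder-style stand-in for "stay on
    #     the training-data manifold").
    in_dist_loss = F.binary_cross_entropy(proxy_edge_probs, adj)

    # (b) Explanation preservation: the frozen GNN's prediction on the
    #     proxy graph should match its prediction on the original graph.
    with torch.no_grad():
        target = gnn(x, adj).softmax(dim=-1)
    proxy_logits = gnn(x, proxy_edge_probs)
    preserve_loss = F.kl_div(
        proxy_logits.log_softmax(dim=-1), target, reduction="batchmean"
    )

    return preserve_loss + beta * in_dist_loss
```

In the paper itself, both terms are derived from an information-theoretic objective and the generator is a learned graph generator; this sketch only mirrors the overall structure of optimizing in-distribution fidelity and explanation preservation in a single loss.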
