
Interpretable Prototype-based Graph Information Bottleneck (2310.19906v2)

Published 30 Oct 2023 in cs.LG and cs.AI

Abstract: The success of Graph Neural Networks (GNNs) has created a need to understand their decision-making process and explain their predictions, giving rise to explainable AI (XAI), which offers transparent explanations for black-box models. Recently, prototypes have been used to improve model explainability by learning prototypes that represent the training graphs most influential for a prediction. However, these approaches tend to provide prototypes with excessive information from the entire graph, leading to the exclusion of key substructures or the inclusion of irrelevant substructures, which can limit both the interpretability and the performance of the model in downstream tasks. In this work, we propose a novel framework of explainable GNNs, called interpretable Prototype-based Graph Information Bottleneck (PGIB), which incorporates prototype learning within the information bottleneck framework so that prototypes capture the key subgraph of the input graph that is important for the model's prediction. This is the first work to incorporate prototype learning into the process of identifying the key subgraphs that have a critical impact on prediction performance. Extensive experiments, including qualitative analysis, demonstrate that PGIB outperforms state-of-the-art methods in terms of both prediction performance and explainability.
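The mechanism the abstract describes, extracting a soft subgraph under an information-bottleneck penalty and matching it against learned prototypes, can be illustrated with a minimal sketch. This is not the paper's exact formulation; the function names, the log-ratio similarity (borrowed from ProtoPNet-style prototype layers), and the node-selection compression penalty are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pgib_style_forward(node_emb, select_logits, prototypes, beta=0.1):
    """Illustrative sketch of prototype-based subgraph extraction.

    node_emb:      (n_nodes, d) node embeddings from a GNN encoder
    select_logits: (n_nodes,)   per-node scores for subgraph membership
    prototypes:    (n_proto, d) learnable prototype vectors
    beta:          weight on the IB-style compression penalty (assumed)
    """
    # Soft node-selection mask: a relaxed choice of the key subgraph.
    p = sigmoid(select_logits)
    # Masked mean-pool the selected nodes into one subgraph embedding.
    subgraph = (p[:, None] * node_emb).sum(axis=0) / (p.sum() + 1e-8)
    # Similarity of the subgraph embedding to each prototype
    # (log-ratio of squared distances, a common prototype-layer choice).
    d2 = np.sum((prototypes - subgraph) ** 2, axis=1)
    sims = np.log((d2 + 1.0) / (d2 + 1e-4))
    # IB-style compression term: discourage keeping the whole graph,
    # a simple stand-in for minimizing I(subgraph; input graph).
    compression = beta * p.mean()
    return sims, compression
```

In a full model, `sims` would feed a linear classifier whose cross-entropy loss plays the role of maximizing I(subgraph; label), while `compression` regularizes the selection toward a compact, prediction-relevant substructure.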

Authors (3)
  1. Sangwoo Seo
  2. Sungwon Kim
  3. Chanyoung Park
Citations (6)
