Verifying Relational Explanations: A Probabilistic Approach

Published 5 Jan 2024 in cs.AI (arXiv:2401.02703v1)

Abstract: Explanations over relational data are hard to verify because the explanation structures are more complex (e.g., graphs). Interpretable explanations (e.g., explanations of predictions on images or text) are typically verified with human subjects, since doing so does not require much expertise; verifying the quality of a relational explanation, however, requires expertise and is hard to scale up. GNNExplainer is arguably one of the most popular explanation methods for Graph Neural Networks. In this paper, we develop an approach to assess the uncertainty in explanations generated by GNNExplainer. Specifically, we ask the explainer to generate explanations for several counterfactual examples, which we construct as symmetric approximations of the relational structure in the original data. From these explanations, we learn a factor graph model to quantify the uncertainty in an explanation. Our results on several datasets show that our approach can help verify explanations from GNNExplainer by reliably estimating the uncertainty of a relation specified in the explanation.
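To make the verification loop concrete, below is a minimal, hedged sketch using PyTorch Geometric's Explainer/GNNExplainer API. Everything specific in it is an illustrative assumption rather than the paper's method: the TinyGCN model and random toy graph are placeholders, random 10% edge dropout stands in for the paper's symmetric approximations of the relational structure, and a simple variance statistic stands in for the learned factor graph model.

```python
# Hypothetical sketch: re-run GNNExplainer on perturbed copies of a graph
# and measure how stable each edge's importance score is across runs.
import torch
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer


class TinyGCN(torch.nn.Module):
    """Illustrative two-layer GCN node classifier (an assumption, not from the paper)."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, 16)
        self.conv2 = GCNConv(16, num_classes)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)


# Toy graph: 20 nodes with random features and 60 random edges.
torch.manual_seed(0)
x = torch.randn(20, 8)
edge_index = torch.randint(0, 20, (2, 60))
model = TinyGCN(in_dim=8, num_classes=3)

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='raw',  # the model outputs raw logits
    ),
)

# Explain node 0 on the original graph and on several perturbed copies.
# Random 10% edge dropout is a crude placeholder for the paper's
# symmetric approximations of the relational structure.
masks = []
for trial in range(5):
    if trial == 0:
        keep = torch.ones(edge_index.size(1), dtype=torch.bool)
    else:
        keep = torch.rand(edge_index.size(1)) > 0.1
    explanation = explainer(x, edge_index[:, keep], index=0)
    # Scatter the mask back onto the original edge list; dropped edges
    # contribute zero importance (a deliberate simplification).
    full = torch.zeros(edge_index.size(1))
    full[keep] = explanation.edge_mask.detach()
    masks.append(full)

# High variance across counterfactual runs flags relations whose role in
# the explanation is uncertain; low variance suggests a reliable relation.
uncertainty = torch.stack(masks).var(dim=0)
print("least stable edges:", uncertainty.topk(5).indices.tolist())
```

In the paper's actual pipeline, the per-relation uncertainty comes from a factor graph model learned over the explanations, not a raw variance; the sketch only conveys the overall shape of the approach.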

References (25)
  1. T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in ICLR, 2017.
  2. P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” in ICLR, 2018.
  3. D. Gunning, “DARPA’s explainable artificial intelligence (XAI) program,” in ACM Conference on Intelligent User Interfaces, 2019.
  4. M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’: Explaining the predictions of any classifier,” in KDD, 2016, pp. 1135–1144.
  5. S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in NeurIPS, vol. 30, 2017.
  6. Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec, “GNNExplainer: Generating explanations for graph neural networks,” in NeurIPS, vol. 32, 2019.
  7. F. Ball and A. Geyer-Schulz, “How symmetric are real-world graphs? A large-scale study,” Symmetry, vol. 10, 2018.
  8. G. Van den Broeck and A. Darwiche, “On the complexity and approximation of binary evidence in lifted inference,” in NeurIPS, vol. 26, 2013.
  9. H.-A. Loeliger, “An introduction to factor graphs,” IEEE Signal Processing Magazine, vol. 21, pp. 28–41, 2004.
  10. J. S. Yedidia, W. Freeman, and Y. Weiss, “Generalized belief propagation,” in NeurIPS, vol. 13, 2000.
  11. B. Mittelstadt, C. Russell, and S. Wachter, “Explaining explanations in AI,” in Proceedings of FAT* ’19, 2019, pp. 279–288.
  12. T. Miller, “Explanation in artificial intelligence: Insights from the social sciences,” Artificial Intelligence, vol. 267, pp. 1–38, 2019.
  13. T. Schnake, O. Eberle, J. Lederer, S. Nakajima, K. T. Schütt, K.-R. Müller, and G. Montavon, “Higher-order explanations of graph neural networks via relevant walks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, pp. 7581–7596, 2022.
  14. Q. Huang, M. Yamada, Y. Tian, D. Singh, and Y. Chang, “GraphLIME: Local interpretable model explanations for graph neural networks,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, pp. 6968–6972, 2023.
  15. A. Shakya, A. T. Magar, S. Sarkhel, and D. Venugopal, “On the verification of embeddings with hybrid markov logic,” in Proceedings of IEEE ICDM, 2023.
  16. M. Vu and M. T. Thai, “PGM-Explainer: Probabilistic graphical model explanations for graph neural networks,” in NeurIPS, vol. 33, 2020, pp. 12225–12235.
  17. L. Faber, A. K. Moghaddam, and R. Wattenhofer, “When comparing to ground truth is wrong: On evaluating GNN explanation methods,” in KDD, 2021, pp. 332–341.
  18. B. Sanchez-Lengeling, J. Wei, B. Lee, E. Reif, P. Wang, W. Qian, K. McCloskey, L. Colwell, and A. Wiltschko, “Evaluating attribution for graph neural networks,” in NeurIPS, vol. 33, 2020, pp. 5898–5910.
  19. C. Wan, W. Chang, T. Zhao, M. Li, S. Cao, and C. Zhang, “Fast and efficient boolean matrix factorization by geometric segmentation,” in AAAI, 2020, pp. 6086–6093.
  20. M. Žitnik and B. Zupan, “Nimfa: A Python library for nonnegative matrix factorization,” JMLR, vol. 13, no. 30, pp. 849–853, 2012.
  21. P. Singla and P. Domingos, “Discriminative training of Markov logic networks,” in AAAI, 2005, pp. 868–873.
  22. I. Sutskever and T. Tieleman, “On the convergence properties of contrastive divergence,” in AISTATS, vol. 9, 2010, pp. 789–795.
  23. J. S. Yedidia, W. T. Freeman, and Y. Weiss, “Generalized belief propagation,” in NeurIPS, 2001, pp. 689–695.
  24. Q. McNemar, “Note on the sampling error of the difference between correlated proportions or percentages,” Psychometrika, vol. 12, no. 2, pp. 153–157, 1947.
  25. T. G. Dietterich, “Approximate statistical tests for comparing supervised classification learning algorithms,” Neural Computation, vol. 10, no. 7, pp. 1895–1923, 1998.
