
A Differential Geometric View and Explainability of GNN on Evolving Graphs (2403.06425v1)

Published 11 Mar 2024 in cs.LG and cs.AI

Abstract: Graphs are ubiquitous in social networks and biochemistry, where Graph Neural Networks (GNNs) are the state-of-the-art models for prediction. Graphs can evolve, and it is vital to formally model and understand how a trained GNN responds to graph evolution. We propose a smooth parameterization of the GNN's predicted distributions using axiomatic attribution, where the distributions lie on a low-dimensional manifold within a high-dimensional embedding space. We exploit this differential geometric viewpoint to model distributional evolution as smooth curves on the manifold. We reparameterize families of curves on the manifold and design a convex optimization problem to find a unique curve that concisely approximates the distributional evolution for human interpretation. Extensive experiments on node classification, link prediction, and graph classification tasks with evolving graphs demonstrate that the proposed method achieves better sparsity, faithfulness, and intuitiveness than state-of-the-art methods.
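The pipeline the abstract describes (a smooth curve of predicted distributions along a graph-evolution path, axiomatic attribution along that path, and a convex sparsification step for a concise explanation) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear "GNN" logits, the edge-indicator features, and the single soft-thresholding step (one proximal update of an L1-regularized convex selection problem) are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
n_edges, n_classes = 8, 3
W = rng.normal(size=(n_classes, n_edges))  # toy linear "GNN" logit layer (assumption)

x0 = rng.integers(0, 2, n_edges).astype(float)   # edge indicators before evolution
x1 = x0.copy()
x1[[1, 4]] = 1.0 - x1[[1, 4]]                    # evolution flips two edges

# Smooth curve of predicted distributions along the straight evolution path,
# playing the role of a curve on the distribution manifold.
curve = np.array([softmax(W @ (x0 + t * (x1 - x0)))
                  for t in np.linspace(0.0, 1.0, 11)])

# Integrated-gradients-style attribution of the predicted class's logit;
# for linear logits the path integral is exact: (x1 - x0) * W[c].
c = int(np.argmax(curve[-1]))
attr = (x1 - x0) * W[c]

# Concise explanation via sparsification: soft-thresholding, i.e. one proximal
# step of an L1-regularized least-squares (convex) selection problem.
lam = 0.1
sparse = np.sign(attr) * np.maximum(np.abs(attr) - lam, 0.0)
```

Under these assumptions, only the edges that actually changed during evolution (indices 1 and 4 here) can receive nonzero attribution, and the thresholding step keeps only those whose contribution exceeds `lam`, mirroring the sparsity goal of the proposed explanation.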

