
Graphtester: Exploring Theoretical Boundaries of GNNs on Graph Datasets (2306.17482v1)

Published 30 Jun 2023 in cs.LG and cs.AI

Abstract: Graph Neural Networks (GNNs) have emerged as a powerful tool for learning from graph-structured data. However, even state-of-the-art architectures have limitations on what structures they can distinguish, imposing theoretical limits on what the networks can achieve on different datasets. In this paper, we provide a new tool called Graphtester for a comprehensive analysis of the theoretical capabilities of GNNs for various datasets, tasks, and scores. We use Graphtester to analyze over 40 different graph datasets, determining upper bounds on the performance of various GNNs based on the number of layers. Further, we show that the tool can also be used for Graph Transformers using positional node encodings, thereby expanding its scope. Finally, we demonstrate that features generated by Graphtester can be used for practical applications such as Graph Transformers, and provide a synthetic dataset to benchmark node and edge features, such as positional encodings. The package is freely available at the following URL: https://github.com/meakbiyik/graphtester.
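The theoretical upper bounds the abstract refers to come from the 1-dimensional Weisfeiler-Leman (1-WL) test: a message-passing GNN with k layers can distinguish at most what k rounds of 1-WL color refinement distinguish. The sketch below is illustrative only (it is not the Graphtester API) and shows this limit on a classic failure case:

```python
# Illustrative sketch (not the Graphtester API): a minimal 1-dimensional
# Weisfeiler-Leman (1-WL) color refinement, the test whose power upper-bounds
# message-passing GNNs. Running k refinement rounds mirrors a k-layer GNN.
from collections import Counter

def wl_colors(adj, rounds):
    """Return the multiset of 1-WL colors after `rounds` refinements.

    adj: dict mapping node -> list of neighbor nodes.
    """
    colors = {v: 0 for v in adj}  # uniform initial coloring (no node features)
    for _ in range(rounds):
        # New color = (own color, sorted multiset of neighbor colors)
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress signatures into small integer color ids
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return Counter(colors.values())

def wl_distinguishable(adj1, adj2, rounds):
    """True if k-round 1-WL separates the two graphs (so a k-layer
    message-passing GNN could, in principle, tell them apart)."""
    return wl_colors(adj1, rounds) != wl_colors(adj2, rounds)

# A 6-cycle vs. two disjoint triangles: both are 2-regular, so 1-WL never
# separates them -- a concrete theoretical limit of plain message-passing GNNs.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_distinguishable(c6, two_triangles, rounds=3))  # False
```

This is the kind of per-dataset, per-layer-count analysis the paper automates: if 1-WL (optionally strengthened with positional node encodings) cannot separate two differently labeled graphs within k rounds, no standard k-layer GNN can score perfectly on that dataset.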
