
On the Expressive Power of Graph Neural Networks (2401.01626v2)

Published 3 Jan 2024 in cs.LG and cs.AI

Abstract: The study of Graph Neural Networks (GNNs) has received considerable interest in the past few years. By extending deep learning to graph-structured data, GNNs can solve a diverse set of tasks in fields including social science, chemistry, and medicine. The development of GNN architectures has largely focused on improving empirical performance on tasks like node or graph classification. However, a recent line of work has instead sought GNN architectures with desirable theoretical properties, by studying their expressive power and designing architectures that maximize it. While there is no consensus on the best way to define the expressiveness of a GNN, it can be viewed from several well-motivated perspectives. Perhaps the most natural approach is to study the universal approximation properties of GNNs, much as this has been studied extensively for MLPs. Another direction focuses on the extent to which GNNs can distinguish between different graph structures, relating this to the graph isomorphism test. In addition, a GNN's ability to compute graph properties such as graph moments has been suggested as another form of expressiveness. These definitions are complementary and have yielded different recommendations for GNN architecture choices. In this paper, we give an overview of the notion of "expressive power" of GNNs and provide insights into the design choices of GNNs.
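The graph isomorphism perspective mentioned in the abstract is usually made concrete via the 1-dimensional Weisfeiler-Leman (1-WL) color refinement test, which upper-bounds the distinguishing power of standard message-passing GNNs. The following sketch (not from the paper; the function names and adjacency-list representation are illustrative assumptions) shows the refinement loop:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement. adj maps node -> iterable of neighbors.

    Colors are represented canonically as nested tuples, so colorings
    computed on different graphs can be compared directly.
    """
    colors = {v: () for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        colors = {
            # new color = (own color, sorted multiset of neighbor colors)
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
    return colors

def wl_distinguishes(adj1, adj2, rounds=3):
    """True if 1-WL separates the two graphs (color histograms differ)."""
    h1 = Counter(wl_colors(adj1, rounds).values())
    h2 = Counter(wl_colors(adj2, rounds).values())
    return h1 != h2
```

For example, 1-WL (and hence any GNN whose power it bounds) cannot distinguish two disjoint triangles from a 6-cycle, since both graphs are 2-regular and every node keeps an identical color at every round; this kind of failure case motivates the higher-order and substructure-counting architectures surveyed in the paper.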

Authors (4)
  1. Ashwin Nalwade
  2. Kelly Marshall
  3. Axel Eladi
  4. Umang Sharma