Almost Surely Asymptotically Constant Graph Neural Networks (2403.03880v3)

Published 6 Mar 2024 in cs.LG and cs.LO

Abstract: We present a new angle on the expressive power of graph neural networks (GNNs) by studying how the predictions of real-valued GNN classifiers, such as those classifying graphs probabilistically, evolve as we apply them to larger graphs drawn from some random graph model. We show that the output converges to a constant function, which upper-bounds what these classifiers can uniformly express. This strong convergence phenomenon applies to a very wide class of GNNs, including state-of-the-art models, with aggregates including mean and the attention-based mechanism of graph transformers. Our results apply to a broad class of random graph models, including sparse and dense variants of the Erdős–Rényi model, the stochastic block model, and the Barabási–Albert model. We empirically validate these findings, observing that the convergence phenomenon appears not only on random graphs but also on some real-world graphs.
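To make the claimed phenomenon concrete, here is a minimal NumPy sketch (not the authors' code): a two-layer GNN with mean aggregation and fixed random weights is applied to Erdős–Rényi graphs G(n, p) of growing size n, and its graph-level probabilistic output empirically settles toward a constant. The architecture, the random node features, and the edge probability p = 0.1 are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, randomly initialised weights for a two-layer GNN with mean aggregation
# (illustrative hyperparameters; the paper covers a much wider model class).
d = 8
W1, b1 = rng.normal(size=(d, d)), rng.normal(size=d)
W2, b2 = rng.normal(size=(d, d)), rng.normal(size=d)
w_out = rng.normal(size=d)

def gnn_output(A, X):
    """Two rounds of mean-neighbour aggregation, then a mean readout with sigmoid."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                          # guard isolated nodes
    H = np.tanh((A @ X) / deg @ W1 + b1)         # layer 1: mean aggregate
    H = np.tanh((A @ H) / deg @ W2 + b2)         # layer 2: mean aggregate
    return float(1.0 / (1.0 + np.exp(-(H.mean(axis=0) @ w_out))))

p = 0.1                                          # assumed dense ER edge probability
for n in (100, 400, 1600, 3200):
    A = np.triu(rng.random((n, n)) < p, 1).astype(float)
    A = A + A.T                                  # symmetric, no self-loops
    X = rng.normal(size=(n, d))                  # random node features
    print(f"n={n:5d}  output={gnn_output(A, X):.4f}")
```

As n grows, the printed outputs should agree to more and more decimal places: the mean over roughly np random neighbour features concentrates, so every node's representation, and hence the readout, approaches a fixed value, matching the convergence to a constant that the abstract describes for the dense Erdős–Rényi regime.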
