Almost Surely Asymptotically Constant Graph Neural Networks (2403.03880v3)
Abstract: We present a new angle on the expressive power of graph neural networks (GNNs) by studying how the predictions of real-valued GNN classifiers, such as those classifying graphs probabilistically, evolve as we apply them on larger graphs drawn from some random graph model. We show that the output converges to a constant function, which upper-bounds what these classifiers can uniformly express. This strong convergence phenomenon applies to a very wide class of GNNs, including state-of-the-art models, with aggregates including mean and the attention-based mechanism of graph transformers. Our results apply to a broad class of random graph models, including sparse and dense variants of the Erdős–Rényi model, the stochastic block model, and the Barabási–Albert model. We empirically validate these findings, observing that the convergence phenomenon appears not only on random graphs but also on some real-world graphs.
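The following is a minimal sketch, not the paper's experimental setup, illustrating the claimed phenomenon under simple assumptions: an untrained two-layer GNN with mean aggregation and random weights, a mean readout, and a softmax classifier, evaluated on Erdős–Rényi graphs G(n, p) of growing size. All names (gnn_output, W1, W2, W_out) are hypothetical; the point is only that the graph-level class probabilities flatten toward a constant as n grows.

```python
# Sketch (assumptions: untrained mean-aggregation GNN, random Gaussian weights,
# Erdos-Renyi inputs). Illustrates the convergence of a probabilistic graph
# classifier's output to a constant as graph size increases.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # feature / hidden dimension
mu = rng.normal(size=d)                  # fixed mean of the node features
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
W_out = rng.normal(size=(d, 2))          # two-class probabilistic readout


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def gnn_output(n, p=0.5):
    """Two mean-aggregation layers + mean readout on a G(n, p) graph."""
    A = rng.random((n, n)) < p
    A = np.triu(A, 1)
    A = (A | A.T).astype(float)          # symmetric adjacency, no self-loops
    deg = np.maximum(A.sum(1, keepdims=True), 1.0)
    X = mu + rng.normal(size=(n, d))     # random node features around mu
    H = np.tanh((A @ X / deg) @ W1)      # layer 1: mean over neighbours
    H = np.tanh((A @ H / deg) @ W2)      # layer 2
    return softmax(H.mean(0) @ W_out)    # graph-level class probabilities


for n in [10, 100, 1000, 5000]:
    print(n, gnn_output(n))              # probabilities stabilise as n grows
```

For larger n, each node's neighbourhood mean concentrates around mu, so the layer outputs, the pooled representation, and hence the predicted class distribution all approach fixed values, which is the kind of asymptotically constant behaviour the abstract describes.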