Graph Neural Networks and Arithmetic Circuits (2402.17805v2)
Published 27 Feb 2024 in cs.LG, cs.AI, and cs.CC
Abstract: We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over the real numbers. In our results, the activation function of the network becomes a gate type in the circuit. Our results hold for families of constant-depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.
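To make the "activation function as a gate type" idea concrete, here is a minimal illustrative sketch (not code from the paper): a single message-passing GNN layer unrolled into exactly the operations an arithmetic circuit over the reals provides, namely addition gates, multiplication gates, and the activation function as an additional gate type. The toy graph, scalar features, weights `w_self`/`w_neigh`, and the choice of ReLU are hypothetical examples for illustration only.

```python
# Illustrative sketch: one GNN layer as a small arithmetic circuit.
# Gates used: + (addition), * (multiplication), and relu_gate
# (the activation function as its own gate type, per the paper's idea).

def relu_gate(x: float) -> float:
    """Activation gate: ReLU chosen as a hypothetical example."""
    return x if x > 0.0 else 0.0

def gnn_layer(adj, features, w_self, w_neigh):
    """One aggregate-combine step, unrolled into gates per node."""
    n = len(adj)
    out = []
    for v in range(n):
        # Aggregation: a tree of + gates over (* gate) weighted neighbours.
        agg = sum(adj[v][u] * features[u] for u in range(n))
        # Combination: two * gates for the weights and one + gate.
        pre_activation = w_self * features[v] + w_neigh * agg
        # Final gate of this node's sub-circuit: the activation function.
        out.append(relu_gate(pre_activation))
    return out

# Hypothetical 3-node path graph with scalar node features.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
features = [1.0, -2.0, 0.5]
print(gnn_layer(adj, features, w_self=0.7, w_neigh=0.3))
```

In this picture, each node's output is computed by a constant-depth sub-circuit of +, *, and activation gates, which is the shape of correspondence the abstract describes for constant-depth circuit and network families.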