Are Targeted Messages More Effective? (2403.06817v2)

Published 11 Mar 2024 in cs.LO, cs.AI, and cs.LG

Abstract: Graph neural networks (GNNs) are deep learning architectures for graphs. Essentially, a GNN is a distributed message passing algorithm, which is controlled by parameters learned from data. It operates on the vertices of a graph: in each iteration, vertices receive a message on each incoming edge, aggregate these messages, and then update their state based on their current state and the aggregated messages. The expressivity of GNNs can be characterised in terms of certain fragments of first-order logic with counting and the Weisfeiler-Lehman algorithm. The core GNN architecture comes in two different versions. In the first version, a message depends only on the state of the source vertex, whereas in the second version it depends on the states of both the source and target vertices. In practice, both versions are used, but the theory of GNNs has so far mostly focused on the first one. On the logical side, the two versions correspond to two fragments of first-order logic with counting that we call modal and guarded. The question of whether the two versions differ in their expressivity has been mostly overlooked in the GNN literature and was only asked recently (Grohe, LICS'23). We answer this question here. It turns out that the answer is not as straightforward as one might expect. By proving that the modal and guarded fragments of first-order logic with counting have the same expressivity over labelled undirected graphs, we show that in a non-uniform setting the two GNN versions have the same expressivity. However, we also prove that in a uniform setting the second version is strictly more expressive.
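
The two message-passing variants described in the abstract can be made concrete with a short sketch. The code below is a minimal illustration, not the paper's formal model: it assumes sum aggregation and a single-layer update, and all names (mp_layer, msg_source_only, msg_source_and_target, W) are illustrative assumptions rather than notation from the paper. The abstract relates version 1 to the modal fragment and version 2 to the guarded fragment.

    # Minimal sketch (not the paper's formal model) of one message-passing round,
    # contrasting the two GNN variants from the abstract. All names here
    # (mp_layer, msg_source_only, msg_source_and_target, W) are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def mp_layer(adj, states, msg, d_out=8):
        """One round of message passing on an undirected graph.
        adj    : (n, n) 0/1 adjacency matrix
        states : (n, d) current vertex states
        msg    : function mapping (source state, target state) to a message
        """
        n, d = states.shape
        W = rng.standard_normal((2 * d, d_out))  # illustrative update weights
        new_states = np.empty((n, d_out))
        for v in range(n):
            # Aggregate (here: sum) the messages arriving at vertex v.
            incoming = [msg(states[u], states[v]) for u in range(n) if adj[u, v]]
            agg = np.sum(incoming, axis=0) if incoming else np.zeros(d)
            # Update v's state from its current state and the aggregate.
            new_states[v] = np.tanh(np.concatenate([states[v], agg]) @ W)
        return new_states

    # Version 1 ("modal"): the message depends only on the source vertex u.
    def msg_source_only(x_u, x_v):
        return x_u

    # Version 2 ("guarded"): the message depends on both source u and target v.
    def msg_source_and_target(x_u, x_v):
        return x_u * x_v  # any joint function of the two endpoint states

    adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
    states = rng.standard_normal((3, 4))
    print(mp_layer(adj, states, msg_source_only).shape)        # (3, 8)
    print(mp_layer(adj, states, msg_source_and_target).shape)  # (3, 8)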

References (30)
  1. Franz Baader and Filippo De Bortoli “On the Expressive Power of Description Logics with Cardinality Constraints on Finite and Infinite Sets” In Frontiers of Combining Systems - 12th International Symposium, FroCoS 2019, London, UK, September 4-6, 2019, Proceedings 11715, Lecture Notes in Computer Science Springer, 2019, pp. 203–219 DOI: 10.1007/978-3-030-29007-8_12
  2. “The Logical Expressiveness of Graph Neural Networks” In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020 OpenReview.net, 2020 URL: https://openreview.net/forum?id=r1lZ7AEKvB
  3. David A. Mix Barrington, Neil Immerman and Howard Straubing “On Uniformity within NC^1” In J. Comput. Syst. Sci. 41.3, 1990, pp. 274–306 DOI: 10.1016/0022-0000(90)90022-D
  4. “Casting a graph net to catch dark showers” In SciPost Physics 10, 2021 DOI: 10.21468/SciPostPhys.10.2.046
  5. “Combinatorial Optimization and Reasoning with Graph Neural Networks” In J. Mach. Learn. Res. 24, 2023, pp. 130:1–130:61 URL: http://jmlr.org/papers/v24/21-0449.html
  6. “Machine Learning on Graphs: A Model and Comprehensive Taxonomy” In Journal of Machine Learning Research 23.89, 2022, pp. 1–64
  7. Kit Fine “In so many possible worlds” In Notre Dame J. Formal Log. 13.4, 1972, pp. 516–520 DOI: 10.1305/NDJFL/1093890715
  8. “Graph echo state networks” In Proceedings of the IEEE International Joint Conference on Neural Networks, 2010
  9. “Neural Message Passing for Quantum Chemistry” In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017 70, Proceedings of Machine Learning Research PMLR, 2017, pp. 1263–1272 URL: http://proceedings.mlr.press/v70/gilmer17a.html
  10. Erich Grädel, Martin Otto and Eric Rosen “Two-Variable Logic with Counting is Decidable” In Proceedings, 12th Annual IEEE Symposium on Logic in Computer Science, Warsaw, Poland, June 29 - July 2, 1997 IEEE Computer Society, 1997, pp. 306–317 DOI: 10.1109/LICS.1997.614957
  11. Martin Grohe “The Descriptive Complexity of Graph Neural Networks” In Proceedings of the 38th Annual ACM/IEEE Symposium on Logic in Computer Science, 2023 DOI: 10.1109/LICS56636.2023.10175735
  12. Martin Grohe “The Descriptive Complexity of Graph Neural Networks” In ArXiv 2303.04613, 2023 DOI: 10.48550/arXiv.2303.04613
  13. Martin Grohe “The Logic of Graph Neural Networks” In 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2021, Rome, Italy, June 29 - July 2, 2021 IEEE, 2021, pp. 1–17 DOI: 10.1109/LICS52264.2021.9470677
  14. William Hesse “Division Is in Uniform TC^0” In Automata, Languages and Programming, 28th International Colloquium, ICALP 2001, Crete, Greece, July 8-12, 2001, Proceedings 2076, Lecture Notes in Computer Science Springer, 2001, pp. 104–114 DOI: 10.1007/3-540-48224-5_9
  15. William Hesse, Eric Allender and David A. Mix Barrington “Uniform constant-depth threshold circuits for division and iterated multiplication” In J. Comput. Syst. Sci. 65.4, 2002, pp. 695–716 DOI: 10.1016/S0022-0000(02)00025-9
  16. “Qualifying Number Restrictions in Concept Languages” In Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR’91). Cambridge, MA, USA, April 22-25, 1991 Morgan Kaufmann, 1991, pp. 335–346 URL: https://dblp.org/rec/conf/kr/HollunderB91
  17. “Describing graphs: A first-order approach to graph canonization” In Complexity theory retrospective Springer-Verlag, 1990, pp. 59–81
  18. Neil Immerman “Descriptive complexity”, Graduate texts in computer science Springer, 1999 DOI: 10.1007/978-1-4612-0539-5
  19. Emanuel Kieronski, Ian Pratt-Hartmann and Lidia Tendera “Two-variable logics with counting and semantic constraints” In ACM SIGLOG News 5.3, 2018, pp. 22–43 DOI: 10.1145/3242953.3242958
  20. “Semi-supervised classification with graph convolutional networks” In Proceedings of the 5th International Conference on Learning Representations, 2017
  21. “First-order logic with counting” In 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2017, Reykjavik, Iceland, June 20-23, 2017 IEEE Computer Society, 2017, pp. 1–12 DOI: 10.1109/LICS.2017.8005133
  22. “Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks” In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019 AAAI Press, 2019, pp. 4602–4609 DOI: 10.1609/AAAI.V33I01.33014602
  23. Martin Otto “Bounded variable logics and counting – A study in finite models” 9, Lecture Notes in Logic Springer Verlag, 1997
  24. Martin Otto “Graded modal logic and counting bisimulation” In ArXiv 1910.00039, 2019 URL: http://arxiv.org/abs/1910.00039
  25. Ian Pratt-Hartmann “Complexity of the Guarded Two-variable Fragment with Counting Quantifiers” In J. Log. Comput. 17.1, 2007, pp. 133–155 DOI: 10.1093/LOGCOM/EXL034
  26. Eran Rosenbluth, Jan Toenshoff and Martin Grohe “Some Might Say All You Need Is Sum” In Proceedings of the 32nd International Joint Conference on Artificial Intelligence, 2023, pp. 4172–4179 DOI: 10.24963/ijcai.2023/464
  27. “The graph neural network model” In IEEE Transactions on Neural Networks 20.1, 2009, pp. 61–80
  28. “Do We Need Anisotropic Graph Neural Networks?” In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022 OpenReview.net, 2022 URL: https://openreview.net/forum?id=hl9ePdHO4_s
  29. Stephan Tobies “PSPACE Reasoning for Graded Modal Logics” In J. Log. Comput. 11.1, 2001, pp. 85–106 DOI: 10.1093/LOGCOM/11.1.85
  30. “How Powerful are Graph Neural Networks?” In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019 OpenReview.net, 2019 URL: https://openreview.net/forum?id=ryGs6iA5Km