
Flood and Echo Net: Algorithmically Aligned GNNs that Generalize (2310.06970v3)

Published 10 Oct 2023 in cs.LG

Abstract: Most Graph Neural Networks follow the standard message-passing framework in which, at each step, all nodes simultaneously exchange messages with their neighbors. We challenge this paradigm by aligning the computation more closely with the execution of distributed algorithms and propose the Flood and Echo Net. A single round of a Flood and Echo Net is centered around an origin node and consists of a flooding phase followed by an echo phase. First, during the flooding phase, messages are sent from the origin and propagated outwards throughout the entire graph. Then, during the echo phase, the message flow reverses and messages are sent back towards the origin. Because nodes are only sparsely activated upon receiving a message, this leads to a wave-like activation pattern that traverses the graph. Through these sparse but parallel activations, the Flood and Echo Net becomes more expressive than traditional MPNNs, which are limited by the 1-WL test, and is also provably more efficient in terms of message complexity. Moreover, the mechanism's inherent ability to generalize across graphs of varying sizes makes it a practical architecture for algorithmic learning. We evaluate the Flood and Echo Net on a variety of synthetic tasks and the SALSA-CLRS benchmark and find that the algorithmic alignment of the execution improves generalization to larger graph sizes.
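
The abstract describes a round as a flooding phase that propagates messages outward from an origin node, followed by an echo phase that sends them back. Below is a minimal Python sketch of that activation pattern, assuming a connected, unweighted graph given as an adjacency list; the `flood_update` and `echo_update` callables are illustrative placeholders for the paper's learned message/update networks, not the authors' actual implementation.

```python
# Minimal sketch of a single Flood and Echo round (illustrative only).
# Assumes a connected, unweighted graph; learned update functions are
# replaced by placeholder callables.
from collections import deque

def bfs_layers(adj, origin):
    """Group nodes by BFS distance from the origin node."""
    dist = {origin: 0}
    layers = [[origin]]
    queue = deque([origin])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] == len(layers):
                    layers.append([])
                layers[dist[v]].append(v)
                queue.append(v)
    return layers, dist

def flood_and_echo_round(adj, h, origin, flood_update, echo_update):
    """One round: flood outwards from the origin, then echo back towards it.

    h            -- dict mapping node -> feature
    flood_update -- placeholder for the learned update applied while flooding
    echo_update  -- placeholder for the learned update applied during the echo
    """
    layers, dist = bfs_layers(adj, origin)

    # Flooding phase: activate nodes layer by layer, moving away from origin.
    for layer in layers[1:]:
        for v in layer:
            incoming = [h[u] for u in adj[v] if dist[u] < dist[v]]
            h[v] = flood_update(h[v], incoming)

    # Echo phase: reverse the flow, activating layers back towards the origin.
    for layer in reversed(layers[:-1]):
        for v in layer:
            incoming = [h[u] for u in adj[v] if dist[u] > dist[v]]
            h[v] = echo_update(h[v], incoming)
    return h

# Toy usage on a 4-cycle with scalar features and sum aggregation as a
# stand-in for the learned updates.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
h = {v: float(v) for v in adj}
update = lambda x, msgs: x + sum(msgs)
print(flood_and_echo_round(adj, h, origin=0, flood_update=update, echo_update=update))
```

Only the nodes in the currently active BFS layer do work at each step, which is the sparse, wave-like activation the abstract refers to; in the paper this is realized with trainable message functions rather than the sums used here.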

