On the Two Sides of Redundancy in Graph Neural Networks (2310.04190v2)

Published 6 Oct 2023 in cs.LG

Abstract: Message passing neural networks iteratively generate node embeddings by aggregating information from neighboring nodes. With increasing depth, information from more distant nodes is included. However, node embeddings may be unable to represent the growing node neighborhoods accurately, and the influence of distant nodes may vanish, a problem referred to as oversquashing. Information redundancy in message passing, i.e., the repetitive exchange and encoding of identical information, amplifies oversquashing. We develop a novel aggregation scheme based on neighborhood trees, which allows redundancy to be controlled by pruning redundant branches of the unfolding trees underlying standard message passing. While the regular structure of unfolding trees allows intermediate results to be reused in a straightforward way, the use of neighborhood trees poses computational challenges. We propose compact representations of neighborhood trees and merge them, exploiting computational redundancy by identifying isomorphic subtrees. From this, node and graph embeddings are computed via a neural architecture inspired by tree canonization techniques. Our method is less susceptible to oversquashing than traditional message passing neural networks and can improve accuracy on widely used benchmark datasets.
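The abstract compresses the whole pipeline into a few sentences, so a small worked illustration may help. The Python sketch below is a toy reconstruction, not the authors' code: neighborhood_tree expands a vertex only the first time a breadth-first search reaches it, a hypothetical simplification of the paper's redundancy-pruning rule, and canonize applies AHU-style tree canonization with hash-consing so that isomorphic subtrees receive the same id and would be embedded only once, which is the computational redundancy the paper exploits when merging trees.

```python
from collections import deque

def neighborhood_tree(adj, root, depth):
    # Pruned unfolding ("neighborhood") tree, sketched: BFS keeps each graph
    # vertex only at its first, shortest-distance level, removing the repeated
    # branches a standard unfolding tree would contain. This is a simplified
    # stand-in for the paper's pruning rule, not its exact definition.
    children = {root: []}
    dist = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        if dist[v] == depth:
            continue
        for w in adj[v]:
            if w not in dist:            # first visit only -> prune repeats
                dist[w] = dist[v] + 1
                children[w] = []
                children[v].append(w)
                queue.append(w)
    return children                      # tree as parent -> children map

def canonize(children, labels, v, pool):
    # AHU-style tree canonization with hash-consing: isomorphic subtrees
    # (same labels, same shape) receive the same integer id, so a network
    # operating on the merged structure computes each embedding only once.
    key = (labels[v],
           tuple(sorted(canonize(children, labels, w, pool)
                        for w in children[v])))
    return pool.setdefault(key, len(pool))

# Toy example: a 4-cycle with uniform labels.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
labels = {v: "C" for v in adj}
pool = {}
trees = {v: neighborhood_tree(adj, v, depth=2) for v in adj}
ids = {v: canonize(trees[v], labels, v, pool) for v in adj}
print(ids)        # every root gets the same id: the four trees are isomorphic
print(len(pool))  # 3 distinct subtree shapes, each stored once
```

On the 4-cycle in the example, the standard depth-2 unfolding tree of each vertex has 7 nodes because vertices recur along walks; the pruned tree has only 4, and merging the trees of all four roots leaves a pool of just 3 distinct subtree shapes.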
