Local Vertex Colouring Graph Neural Networks

Published 10 Mar 2024 in cs.LG (arXiv:2403.06080v1)

Abstract: In recent years, a significant amount of research has focused on expanding the expressivity of Graph Neural Networks (GNNs) beyond the Weisfeiler-Lehman (1-WL) framework. While many of these studies have advanced expressivity, they have frequently done so at the expense of efficiency or have been restricted to specific types of graphs. In this study, we investigate the expressivity of GNNs from the perspective of graph search. Specifically, we propose a new vertex colouring scheme and demonstrate that classical search algorithms can efficiently compute graph representations that go beyond 1-WL. We show that the colouring scheme inherits useful properties from graph search that help solve problems such as graph biconnectivity. Furthermore, we show that under certain conditions, the expressivity of GNNs increases hierarchically with the radius of the search neighbourhood. To further investigate the proposed scheme, we develop a new type of GNN based on two search strategies, breadth-first search and depth-first search, highlighting the graph properties they can capture on top of 1-WL. Our code is available at https://github.com/seanli3/lvc.
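The abstract's central claim — that colourings derived from classical graph search can separate graphs that 1-WL cannot — can be illustrated with a toy sketch. The code below is a hypothetical, minimal example, not the paper's actual colouring scheme: it colours each vertex by the multiset of BFS depths it observes, then compares two 2-regular graphs (a 6-cycle versus two disjoint triangles) that 1-WL colour refinement cannot distinguish, since every vertex has degree 2.

```python
from collections import deque

def bfs_depth_colour(adj, src):
    """Colour a vertex by the sorted multiset of BFS depths reachable from it."""
    depth = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                q.append(v)
    return tuple(sorted(depth.values()))

def graph_colour(adj):
    """A graph-level colour: the sorted multiset of all vertex colours."""
    return tuple(sorted(bfs_depth_colour(adj, v) for v in adj))

# C6: a single 6-cycle. 2xC3: two disjoint triangles. Both are 2-regular,
# so 1-WL assigns every vertex the same colour and cannot tell them apart;
# BFS depth profiles differ (depth reaches 3 in C6, only 1 in a triangle).
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1],
          3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(graph_colour(c6) != graph_colour(two_c3))  # True
```

This mirrors the abstract's point only in spirit: search exposes distance and connectivity structure (e.g. how far a traversal can travel within a component) that purely local neighbourhood aggregation discards.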

Authors (3)