
Union Subgraph Neural Networks (2305.15747v3)

Published 25 May 2023 in cs.LG

Abstract: Graph Neural Networks (GNNs) are widely used for graph representation learning in many application domains. The expressiveness of vanilla GNNs is upper-bounded by the 1-dimensional Weisfeiler-Leman (1-WL) test, as they operate on rooted subtrees through iterative message passing. In this paper, we empower GNNs by injecting neighbor-connectivity information extracted from a new type of substructure. We first investigate the different kinds of connectivity that exist in a local neighborhood and identify a substructure, called the union subgraph, which captures the complete picture of the 1-hop neighborhood of an edge. We then design a shortest-path-based substructure descriptor that possesses three desirable properties and can effectively encode the high-order connectivities in union subgraphs. By infusing the encoded neighbor connectivities, we propose a novel model, namely the Union Subgraph Neural Network (UnionSNN), which is proven to be strictly more powerful than 1-WL in distinguishing non-isomorphic graphs. Additionally, the local encoding from union subgraphs can be injected into arbitrary message-passing neural networks (MPNNs) and Transformer-based models as a plugin. Extensive experiments on 18 benchmarks covering both graph-level and node-level tasks demonstrate that UnionSNN outperforms state-of-the-art baseline models, with competitive computational efficiency. Injecting our local encoding into existing models boosts their performance by up to 11.09%. Our code is available at https://github.com/AngusMonroe/UnionSNN.
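The abstract's two key ingredients can be illustrated with a small sketch. The snippet below is a minimal, hypothetical interpretation, not the paper's implementation: it assumes the union subgraph of an edge (u, v) is the subgraph induced on the union of the two endpoints' closed 1-hop neighborhoods, and it uses plain BFS shortest-path distances within that subgraph as a stand-in for the paper's shortest-path-based descriptor (whose exact form, and its three properties, are defined in the paper itself).

```python
from collections import deque

def union_subgraph(adj, u, v):
    """Induced subgraph on N[u] ∪ N[v] (closed 1-hop neighborhoods).
    `adj` is a plain adjacency dict: node -> list of neighbors.
    This definition is an assumption for illustration; see the paper."""
    nodes = {u, v} | set(adj[u]) | set(adj[v])
    edges = {frozenset((a, b)) for a in nodes for b in adj[a] if b in nodes}
    return nodes, edges

def sp_distances(nodes, edges, src):
    """BFS shortest-path distances from `src`, restricted to the subgraph.
    A simple placeholder for the paper's shortest-path-based descriptor."""
    nbrs = {n: set() for n in nodes}
    for e in edges:
        a, b = tuple(e)
        nbrs[a].add(b)
        nbrs[b].add(a)
    dist = {src: 0}
    queue = deque([src])
    while queue:
        x = queue.popleft()
        for y in nbrs[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

# Toy graph: a 4-cycle 0-1-2-3 with chord 1-3
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
nodes, edges = union_subgraph(adj, 0, 1)   # all four nodes, five edges
dists = sp_distances(nodes, edges, 0)      # {0: 0, 1: 1, 3: 1, 2: 2}
```

Note how the union subgraph keeps edges among the neighbors themselves (here 2-3 and 1-3), which is exactly the connectivity information that rooted-subtree message passing, and hence 1-WL, cannot see.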

Authors (5)
  1. Jiaxing Xu (17 papers)
  2. Aihu Zhang (3 papers)
  3. Qingtian Bian (7 papers)
  4. Vijay Prakash Dwivedi (15 papers)
  5. Yiping Ke (24 papers)
Citations (5)
