Topology-Informed Graph Transformer (2402.02005v1)
Abstract: Transformers have revolutionized performance in Natural Language Processing and Vision, paving the way for their integration with Graph Neural Networks (GNNs). One key challenge in enhancing graph transformers is strengthening their discriminative power in distinguishing graph isomorphism classes, which plays a crucial role in boosting their predictive performance. To address this challenge, we introduce the 'Topology-Informed Graph Transformer (TIGT)', a novel transformer that enhances both the discriminative power in detecting graph isomorphisms and the overall performance of Graph Transformers. TIGT consists of four components: a topological positional embedding layer using non-isomorphic universal covers based on cyclic subgraphs to ensure unique graph representations; a dual-path message-passing layer to explicitly encode topological characteristics throughout the encoder layers; a global attention mechanism; and a graph information layer to recalibrate channel-wise graph features for better feature representation. TIGT outperforms previous Graph Transformers in classifying a synthetic dataset aimed at distinguishing isomorphism classes of graphs. Additionally, mathematical analysis and empirical evaluations highlight our model's competitive edge over state-of-the-art Graph Transformers across various benchmark datasets.
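The four-component architecture described above can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation: the class name `DualPathEncoderLayer`, the dense adjacency inputs, the mean-aggregation message passing, and the squeeze-and-excitation-style gate are illustrative assumptions, and the topological positional embedding (built in the paper from cyclic subgraphs and their universal covers) is reduced here to a precomputed `cycle_adj` placeholder.

```python
# Hypothetical sketch of one TIGT-style encoder block (not the authors' code).
# Assumptions: a single graph with dense (N, N) adjacency matrices, mean
# aggregation for message passing, and a cycle-augmented adjacency standing in
# for the topological positional information.
import torch
import torch.nn as nn


class DualPathEncoderLayer(nn.Module):
    """Dual-path message passing (original graph + cycle-augmented graph),
    global self-attention, and a channel-wise "graph information" gate.
    x: (N, d) node features; adj, cycle_adj: (N, N) adjacency matrices."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.mp_graph = nn.Linear(dim, dim)   # path 1: original edges
        self.mp_cycle = nn.Linear(dim, dim)   # path 2: cyclic-subgraph edges
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # Squeeze-and-excitation-style gate rescaling channels from a pooled
        # graph-level summary of the node features.
        self.gate = nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(),
                                  nn.Linear(dim // 2, dim), nn.Sigmoid())

    def forward(self, x, adj, cycle_adj):
        # Dual-path message passing: mean-aggregate neighbors along both graphs.
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        cdeg = cycle_adj.sum(-1, keepdim=True).clamp(min=1)
        local = self.mp_graph(adj @ x / deg) + self.mp_cycle(cycle_adj @ x / cdeg)
        x = self.norm1(x + torch.relu(local))

        # Global attention over all nodes (one graph, so a single batch item).
        attn_out, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))
        x = self.norm2(x + attn_out.squeeze(0))

        # Graph information layer: channel-wise recalibration from mean pooling.
        return x * self.gate(x.mean(dim=0, keepdim=True))


if __name__ == "__main__":
    n, d = 6, 32
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) > 0.5).float()
    adj = ((adj + adj.T) > 0).float()        # symmetrize
    cycle_adj = adj.clone()                  # placeholder cycle-augmented adjacency
    layer = DualPathEncoderLayer(d)
    print(layer(x, adj, cycle_adj).shape)    # torch.Size([6, 32])
```

In the paper, the cycle-based information also enters through the positional embedding layer before the encoder stack; the sketch only shows how the dual local paths, global attention, and channel recalibration could compose within a single block.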