A Survey on Structure-Preserving Graph Transformers (2401.16176v1)
Abstract: The transformer architecture has shown remarkable success in various domains, such as natural language processing and computer vision. When it comes to graph learning, transformers are required not only to capture the interactions between pairs of nodes but also to preserve the graph structures that encode the underlying relations and proximity between nodes, demonstrating the expressive power to distinguish different graph structures. Accordingly, various structure-preserving graph transformers have been proposed and widely applied to tasks such as graph-level prediction in bioinformatics and chemoinformatics. However, strategies for graph structure preservation have not been well organized and systematized in the literature. In this paper, we provide a comprehensive overview of structure-preserving graph transformers and generalize these methods from the perspective of their design objectives. First, we divide the strategies into four main groups: node feature modulation, context node sampling, graph rewriting, and transformer architecture improvements. We then further classify the strategies according to the coverage and goals of graph structure preservation. Furthermore, we discuss open challenges and future directions for graph transformer models that aim to preserve graph structures and understand the nature of graphs.
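To make the taxonomy concrete, the sketch below illustrates the first group, node feature modulation, in the style of Laplacian positional encodings (as popularized by graph transformers cited below, e.g., "A generalization of transformer networks to graphs"): low-frequency eigenvectors of the normalized graph Laplacian are concatenated to the raw node features before the nodes are fed, as tokens, to an off-the-shelf Transformer encoder. This is a minimal sketch under assumed names (`laplacian_pe`, `PEGraphTransformer` are illustrative, not from the survey); practical implementations additionally handle eigenvector sign ambiguity, batching, and attention sparsification.

```python
# Minimal sketch (assumed names, not the survey's code) of "node feature
# modulation": concatenate Laplacian positional encodings to node features,
# then run a plain Transformer encoder over nodes as tokens.
import numpy as np
import torch
import torch.nn as nn


def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k smallest non-trivial eigenvectors of the normalized Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]        # drop the trivial first eigenvector


class PEGraphTransformer(nn.Module):
    """Hypothetical model: node features modulated by positional encodings."""

    def __init__(self, feat_dim: int, pe_dim: int, hidden: int = 64, heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(feat_dim + pe_dim, hidden)  # fuse features + PE
        layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x: torch.Tensor, pe: torch.Tensor) -> torch.Tensor:
        h = self.proj(torch.cat([x, pe], dim=-1))       # node feature modulation
        return self.encoder(h.unsqueeze(0)).squeeze(0)  # dense attention over all nodes


# Toy usage: a 4-node path graph with random 8-dimensional features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = torch.randn(4, 8)
pe = torch.tensor(laplacian_pe(adj, k=2), dtype=torch.float32)
out = PEGraphTransformer(feat_dim=8, pe_dim=2)(x, pe)
print(out.shape)  # torch.Size([4, 64])
```

Because the structural signal lives in the node features rather than in the attention mechanism itself, this strategy leaves the Transformer architecture untouched; the other three groups in the taxonomy instead alter the attention context (context node sampling, graph rewriting) or the attention computation itself (transformer architecture improvements).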
- Graph representation learning and its applications: A survey. Sensors, 23(8), 2023.
- Revisiting over-smoothing and over-squashing using Ollivier-Ricci curvature. In Proceedings of the 40th International Conference on Machine Learning (ICML 2023), volume 202 of Proceedings of Machine Learning Research, pages 25956–25979, Honolulu, Hawaii, USA, 23-29 July 2023. PMLR.
- Mitigating degree biases in message passing mechanism by utilizing community structures. arXiv preprint, abs/2312.16788, 2023.
- Structure-aware transformer for graph representation learning. In Proceedings of the 39th International Conference on Machine Learning (ICML 2022), volume 162 of Proceedings of Machine Learning Research, pages 3469–3489, Baltimore, Maryland, USA, 17-23 July 2022. PMLR.
- A generalization of transformer networks to graphs. In Proceedings of the AAAI 2021 Workshop on Deep Learning on Graphs (DLG-AAAI 2021), Virtual Event, 8-9 Feb 2021.
- Transitivity-preserving graph representation learning for bridging local connectivity and role-based similarity. In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024), Vancouver, BC, Canada, 22-25 Feb 2024. AAAI Press. To appear.
- Day-ahead hourly solar irradiance forecasting based on multi-attributed spatio-temporal graph convolutional network. Sensors, 22(19):7179, 2022.
- Learning multi-resolution representations of research patterns in bibliographic networks. Journal of Informetrics, 15(1):101126, 2021.
- Companion animal disease diagnostics based on literal-aware medical knowledge graph representation learning. IEEE Access, 11:114238–114249, 2023.
- Plot structure decomposition in narrative multimedia by analyzing personalities of fictional characters. Applied Sciences, 11(4):1645, 2021.
- Connector 0.5: A unified framework for graph representation learning. arXiv preprint, abs/2304.13195, 2023.
- Coarformer: Transformer for large graph via graph coarsening. OpenReview.net, 2022.
- Global self-attention as a replacement for graph convolution. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), pages 655–665, Washington, DC, USA, 14-18 Aug 2022. ACM.
- Rethinking graph transformers with spectral attention. In Proceedings of the 35th Annual Conference on Neural Information Processing Systems (NeurIPS 2021), pages 21618–21629, Virtual Event, Dec 2021.
- Recipe for a general, powerful, scalable graph transformer. In Proceedings of the 36th Annual Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, Louisiana, USA, Nov 28 - Dec 9, 2022.
- Universal graph transformer self-attention networks. In Companion Proceedings of the Web Conference 2022 (WWW 2022), pages 193–196, Virtual Event, April 25-29, 2022. ACM.
- Masked label prediction: Unified message passing model for semi-supervised classification. In Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI 2021), pages 1548–1554, Virtual Event, Aug 2021. ijcai.org.
- Representing long-range context for graph neural networks with global attention. In Proceedings of the 35th Annual Conference on Neural Information Processing Systems (NeurIPS 2021), pages 13266–13279, Virtual Event, Dec 6-14, 2021.
- Self-supervised graph transformer on large-scale molecular data. In Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS 2020), Virtual Event, December 6-12, 2020.
- Do transformers really perform badly for graph representation? In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), pages 28877–28888, Virtual Event, Dec 2021.
- Deformable graph transformer. OpenReview.net, 2023.
- Random walk conformer: Learning graph representation from long and short range. In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023), pages 10936–10944, Washington, DC, USA, Feb 7-14, 2023. AAAI Press.
- Graphit: Encoding graph structure in transformers. arXiv preprint, abs/2106.05667, 2021.
- Pure transformers are powerful graph learners. In Proceedings of the 36th Annual Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, Louisiana, USA, Nov 28 - Dec 9, 2022.
- Gophormer: Ego-graph transformer for node classification. arXiv preprint, abs/2110.13094, 2021.
- Rethinking the expressive power of GNNs via graph biconnectivity. In Proceedings of the 11th International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
- GRPE: Relative positional encoding for graph transformer. In Proceedings of the ICLR 2022 Workshop on Machine Learning for Drug Discovery (MLDD), Virtual Event, Apr 2022.
- Graph neural networks with learnable structural and positional representations. In Proceedings of the 10th International Conference on Learning Representations (ICLR 2022), Virtual Event, 25–29 Apr 2022. OpenReview.net.
- GOAT: A global transformer on large-scale graphs. In Proceedings of the 40th International Conference on Machine Learning (ICML 2023), volume 202 of Proceedings of Machine Learning Research, pages 17375–17390, Honolulu, Hawaii, USA, 23-29 July 2023. PMLR.
- Exphormer: Sparse transformers for graphs. In Proceedings of the 40th International Conference on Machine Learning (ICML 2023), volume 202 of Proceedings of Machine Learning Research, pages 31613–31632, Honolulu, Hawaii, USA, 23-29 July 2023. PMLR.
- Are more layers beneficial to graph transformers? In Proceedings of the 11th International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
- Hierarchical graph transformer with adaptive node sampling. In Proceedings of the 36th Annual Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, Louisiana, USA, Nov 28 - Dec 9, 2022.
- NAGphormer: A tokenized graph transformer for node classification in large graphs. In Proceedings of the 11th International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
- Graph inductive biases in transformers without message passing. In Proceedings of the 40th International Conference on Machine Learning (ICML 2023), volume 202 of Proceedings of Machine Learning Research, pages 23321–23337, Honolulu, Hawaii, USA, 23-29 July 2023. PMLR.
- Agformer: Efficient graph representation with anchor-graph transformer. arXiv preprint, abs/2305.07521, 2023.
- Gapformer: Graph transformer with graph pooling for node classification. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), pages 2196–2205, Macao, China, Aug 2023. ijcai.org.
- Sign and basis invariant networks for spectral graph representation learning. In Proceedings of the 11th International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
- Benchmarking graph neural networks. Journal of Machine Learning Research, 24:43:1–43:48, 2023.
- Story embedding: Learning distributed representations of stories based on character networks (extended abstract). In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020), pages 5070–5074, Yokohama, Japan, Jan 2020. ijcai.org.
- A survey on efficient training of transformers. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), pages 6823–6831, Macao, China, 19-25 Aug 2023. ijcai.org.
Authors: Van Thuy Hoang, O-Joun Lee