Graph Propagation Transformer for Graph Representation Learning (2305.11424v3)

Published 19 May 2023 in cs.LG and cs.AI

Abstract: This paper presents a novel transformer architecture for graph representation learning. The core insight of our method is to fully consider the information propagation among nodes and edges in a graph when building the attention module in the transformer blocks. Specifically, we propose a new attention mechanism called Graph Propagation Attention (GPA). It explicitly passes information among nodes and edges in three ways, i.e., node-to-node, node-to-edge, and edge-to-node, which is essential for learning graph-structured data. On this basis, we design an effective transformer architecture named Graph Propagation Transformer (GPTrans) to further help learn graph data. We verify the performance of GPTrans in a wide range of graph learning experiments on several benchmark datasets. The results show that our method outperforms many state-of-the-art transformer-based graph models. The code will be released at https://github.com/czczup/GPTrans.
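The three propagation paths named in the abstract can be sketched in code. The following is a minimal, hypothetical NumPy illustration of the idea, not the paper's actual formulation: node-to-node is realized as dot-product attention biased by edge features, edge-to-node as pooling incoming edge features into nodes, and node-to-edge as writing the attention map back into the edge channels. The specific projections and pooling choices here are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def graph_propagation_attention(node_feats, edge_feats):
    """Illustrative sketch of the three propagation paths (node-to-node,
    node-to-edge, edge-to-node). A simplified stand-in, NOT the exact GPA
    formulation from the paper.

    node_feats: (N, d) node features.
    edge_feats: (N, N, d) dense pairwise edge features.
    """
    N, d = node_feats.shape
    # Node-to-node: scaled dot-product attention between nodes, biased by a
    # scalar summary of the edge features (assumption: channel mean).
    scores = node_feats @ node_feats.T / np.sqrt(d)   # (N, N)
    edge_bias = edge_feats.mean(axis=-1)              # (N, N)
    attn = softmax(scores + edge_bias, axis=-1)       # rows sum to 1
    new_nodes = attn @ node_feats                     # aggregate node values
    # Edge-to-node: pool incoming edge features into each node (assumption).
    new_nodes = new_nodes + edge_feats.mean(axis=0)
    # Node-to-edge: feed the attention map back into the edge representation
    # (assumption: broadcast the attention weight across edge channels).
    new_edges = edge_feats + attn[..., None]
    return new_nodes, new_edges
```

In a full transformer block, these updates would sit inside a multi-head layer with learned projections, residual connections, and normalization; the sketch keeps only the information-flow structure.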

Authors (8)
  1. Zhe Chen (237 papers)
  2. Hao Tan (80 papers)
  3. Tao Wang (700 papers)
  4. Tianrun Shen (3 papers)
  5. Tong Lu (85 papers)
  6. Qiuying Peng (13 papers)
  7. Cheng Cheng (188 papers)
  8. Yue Qi (25 papers)
Citations (9)