SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations (2306.10759v5)

Published 19 Jun 2023 in cs.LG, cs.AI, and cs.SI

Abstract: Learning representations on large graphs is a long-standing challenge due to the inter-dependencies among massive data points. Transformers, as an emerging class of foundation encoders for graph-structured data, have shown promising performance on small graphs due to their global attention, which captures all-pair influence beyond neighboring nodes. Even so, existing approaches tend to inherit the design spirit of Transformers for language and vision tasks, embracing complicated models that stack deep multi-head attention layers. In this paper, we demonstrate that even a single layer of attention can deliver surprisingly competitive performance across node property prediction benchmarks whose node counts range from thousands to billions. This finding encourages us to rethink the design philosophy of Transformers on large graphs, where global attention becomes a computational overhead that hinders scalability. We frame the proposed scheme as Simplified Graph Transformers (SGFormer), empowered by a simple attention model that can efficiently propagate information among arbitrary nodes in a single layer. SGFormer requires no positional encodings, feature/graph pre-processing, or augmented losses. Empirically, SGFormer successfully scales to the web-scale graph ogbn-papers100M and yields up to 141x inference acceleration over SOTA Transformers on medium-sized graphs. Beyond the current results, we believe the proposed methodology opens a new technical path of independent interest for building Transformers on large graphs.
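
The abstract does not spell out the attention formulation, only that a single layer propagates information among arbitrary nodes without positional encodings. The sketch below illustrates one way such a single-layer, linear-complexity all-pair attention could look in PyTorch; the module name, feature maps, and normalization scheme are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGlobalAttention(nn.Module):
    """Hypothetical one-layer all-pair attention with linear cost in the
    number of nodes N. This is a sketch of the general idea, not the
    exact SGFormer formulation."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.q = nn.Linear(in_dim, hidden_dim)
        self.k = nn.Linear(in_dim, hidden_dim)
        self.v = nn.Linear(in_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, in_dim] node features; no positional encodings are used.
        q = F.normalize(self.q(x), dim=-1)   # [N, d]
        k = F.normalize(self.k(x), dim=-1)   # [N, d]
        v = self.v(x)                        # [N, d]
        # Associate (K^T V) first so the cost is O(N * d^2) instead of
        # materializing the N x N attention matrix (O(N^2 * d)).
        kv = k.transpose(0, 1) @ v                                    # [d, d]
        num = q @ kv                                                  # [N, d]
        denom = (q @ k.sum(dim=0, keepdim=True).T).clamp(min=1e-6)   # [N, 1]
        return num / denom

# Usage sketch: a single pass over all nodes of a (possibly large) graph.
x = torch.randn(10_000, 128)
out = SimpleGlobalAttention(128, 64)(x)   # [10000, 64]
```

Computing the key-value product before applying the queries is what keeps the per-layer cost linear in the number of nodes, which is the property that matters when scaling to graphs such as ogbn-papers100M.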

Authors (8)
  1. Qitian Wu (29 papers)
  2. Wentao Zhao (20 papers)
  3. Chenxiao Yang (16 papers)
  4. Hengrui Zhang (38 papers)
  5. Fan Nie (13 papers)
  6. Haitian Jiang (9 papers)
  7. Yatao Bian (60 papers)
  8. Junchi Yan (241 papers)
Citations (44)