A Scalable and Effective Alternative to Graph Transformers (2406.12059v1)

Published 17 Jun 2024 in cs.LG and cs.SI

Abstract: Graph Neural Networks (GNNs) have shown impressive performance in graph representation learning, but they face challenges in capturing long-range dependencies due to their limited expressive power. To address this, Graph Transformers (GTs) were introduced, utilizing the self-attention mechanism to effectively model pairwise node relationships. Despite their advantages, GTs suffer from quadratic complexity w.r.t. the number of nodes in the graph, hindering their applicability to large graphs. In this work, we present the Graph-Enhanced Contextual Operator (GECO), a scalable and effective alternative to GTs that leverages neighborhood propagation and global convolutions to capture local and global dependencies in quasilinear time. Our study on synthetic datasets reveals that GECO achieves a 169x speedup over optimized attention on a graph with 2M nodes. Further evaluations on a diverse range of benchmarks showcase that GECO scales to large graphs where traditional GTs often face memory and time limitations. Notably, GECO consistently achieves comparable or superior quality to baselines, improving the SOTA by up to 4.5%, and offers a scalable and effective solution for large-scale graph learning.
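The abstract pairs local neighborhood propagation with a global convolution that runs in quasilinear time. As a rough illustration of how such a block could be wired, the sketch below mixes one step of message passing with an FFT-based circular convolution over a fixed node ordering. Everything here (geco_block, the global filter h, the mixing weight alpha) is an illustrative assumption, not the paper's actual operator or implementation.

```python
# Minimal sketch: combine local neighborhood propagation with an
# FFT-based global convolution so the global step costs O(N log N)
# in the number of nodes N, instead of the O(N^2) of full attention.
# All function and variable names are hypothetical.
import numpy as np

def local_propagation(adj_norm, x):
    """One step of neighborhood propagation: X' = A_hat @ X.
    (A sparse adjacency would keep this linear in the edge count;
    a dense matrix is used here only to keep the sketch short.)"""
    return adj_norm @ x

def global_convolution(x, h):
    """Circular convolution of each feature channel with a global
    filter h along a fixed node ordering, via FFT in O(N log N)."""
    n = x.shape[0]
    Hf = np.fft.rfft(h, n=n)                # filter spectrum, shape (n//2+1,)
    Xf = np.fft.rfft(x, n=n, axis=0)        # per-channel node spectra
    return np.fft.irfft(Xf * Hf[:, None], n=n, axis=0)

def geco_block(adj_norm, x, h, alpha=0.5):
    """Illustrative block mixing local and global context."""
    local_out = local_propagation(adj_norm, x)
    global_out = global_convolution(x, h)
    return alpha * local_out + (1 - alpha) * global_out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 1024, 16
    # Random symmetric adjacency with self-loops, row-normalized.
    a = (rng.random((n, n)) < 0.01).astype(float)
    a = np.maximum(a, a.T) + np.eye(n)
    adj_norm = a / a.sum(axis=1, keepdims=True)
    x = rng.standard_normal((n, d))
    h = rng.standard_normal(n) / np.sqrt(n)  # global filter over nodes
    print(geco_block(adj_norm, x, h).shape)  # (1024, 16)
```

The FFT route is what makes an all-pairs-style global mixing step quasilinear rather than quadratic; how the actual paper defines its global convolution, learns the filter, and handles node ordering may differ from this sketch.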

Authors (8)
  1. Kaan Sancak (4 papers)
  2. Zhigang Hua (15 papers)
  3. Jin Fang (23 papers)
  4. Yan Xie (18 papers)
  5. Andrey Malevich (9 papers)
  6. Bo Long (60 papers)
  7. Ümit V. Çatalyürek (27 papers)
  8. Muhammed Fatih Balin (4 papers)
