FastGAE: Scalable Graph Autoencoders with Stochastic Subgraph Decoding (2002.01910v5)

Published 5 Feb 2020 in cs.LG, cs.SI, and stat.ML

Abstract: Graph autoencoders (AE) and variational autoencoders (VAE) are powerful node embedding methods, but suffer from scalability issues. In this paper, we introduce FastGAE, a general framework to scale graph AE and VAE to large graphs with millions of nodes and edges. Our strategy, based on an effective stochastic subgraph decoding scheme, significantly speeds up the training of graph AE and VAE while preserving or even improving performances. We demonstrate the effectiveness of FastGAE on various real-world graphs, outperforming the few existing approaches to scale graph AE and VAE by a wide margin.
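The core idea stated in the abstract, decoding only a sampled subgraph instead of the full adjacency matrix at each training step, can be illustrated with a short sketch. The encoder architecture, the degree-based sampling distribution, and the names GAEEncoder, sample_subgraph, and fastgae_step below are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of stochastic subgraph decoding for a graph autoencoder:
# encode all nodes, but reconstruct only the adjacency of a sampled node subset.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class GAEEncoder(nn.Module):
    """One-layer graph convolutional encoder producing node embeddings (assumed architecture)."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, emb_dim, bias=False)

    def forward(self, adj_norm, x):
        # adj_norm: dense normalized adjacency (n x n), x: node features (n x d)
        return adj_norm @ self.lin(x)


def sample_subgraph(degrees, n_sample, alpha=1.0, rng=None):
    """Sample node indices with probability proportional to degree**alpha
    (one plausible choice of sampling distribution)."""
    rng = rng or np.random.default_rng()
    p = degrees.astype(float) ** alpha
    p /= p.sum()
    return rng.choice(len(degrees), size=n_sample, replace=False, p=p)


def fastgae_step(encoder, optimizer, adj_norm, adj, x, n_sample):
    """One training step: encode all nodes, decode only the sampled subgraph.
    adj is assumed to be a dense float {0, 1} adjacency tensor."""
    optimizer.zero_grad()
    z = encoder(adj_norm, x)                      # embeddings for all n nodes
    degrees = adj.sum(dim=1).cpu().numpy()
    idx = torch.as_tensor(sample_subgraph(degrees, n_sample))
    z_s = z[idx]                                  # embeddings of the sampled nodes
    logits = z_s @ z_s.t()                        # inner-product decoder on the subgraph only
    target = adj[idx][:, idx]                     # subgraph adjacency as reconstruction target
    loss = F.binary_cross_entropy_with_logits(logits, target)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, decoding only an n_sample x n_sample submatrix reduces the per-step reconstruction cost from O(n^2) to O(n_sample^2), which is what makes training practical on graphs with millions of nodes and edges.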

Authors (5)
  1. Guillaume Salha (9 papers)
  2. Romain Hennequin (47 papers)
  3. Jean-Baptiste Remy (2 papers)
  4. Manuel Moussallam (18 papers)
  5. Michalis Vazirgiannis (116 papers)
Citations (6)
