
A Degeneracy Framework for Scalable Graph Autoencoders (1902.08813v3)

Published 23 Feb 2019 in cs.LG, cs.SI, and stat.ML

Abstract: In this paper, we present a general framework to scale graph autoencoders (AE) and graph variational autoencoders (VAE). This framework leverages graph degeneracy concepts to train models only from a dense subset of nodes instead of using the entire graph. Together with a simple yet effective propagation mechanism, our approach significantly improves scalability and training speed while preserving performance. We evaluate and discuss our method on several variants of existing graph AE and VAE, providing the first application of these models to large graphs with up to millions of nodes and edges. We achieve empirically competitive results w.r.t. several popular scalable node embedding methods, which emphasizes the relevance of pursuing further research towards more scalable graph AE and VAE.
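
The core idea is concrete enough to sketch: graph degeneracy here refers to the k-core decomposition, so the autoencoder is trained only on a dense k-core subgraph, and embeddings for out-of-core nodes are then inferred by propagation along the degeneracy ordering. The Python sketch below illustrates this pipeline under stated assumptions: `embed_fn` is a hypothetical placeholder for any trained graph AE/VAE encoder, and the plain neighbor-averaging step is a simplified stand-in for the paper's propagation mechanism, not its exact formulation.

```python
# Minimal sketch of the degeneracy framework, assuming "graph degeneracy"
# means k-core decomposition. `embed_fn` is a hypothetical placeholder for
# any trained graph AE/VAE encoder; the neighbor-averaging propagation is a
# simplified stand-in for the paper's propagation mechanism.
import networkx as nx
import numpy as np

def core_train_then_propagate(G, k, embed_fn):
    core = nx.k_core(G, k)             # dense subgraph used for training
    Z = dict(embed_fn(core))           # node -> embedding from the encoder
    dim = len(next(iter(Z.values())))  # assumes the k-core is non-empty
    core_num = nx.core_number(G)
    # Embed the remaining nodes in decreasing core-number order, so most
    # nodes already have at least one embedded neighbor when reached.
    for v in sorted((u for u in G if u not in Z), key=lambda u: -core_num[u]):
        nbrs = [Z[u] for u in G.neighbors(v) if u in Z]
        Z[v] = np.mean(nbrs, axis=0) if nbrs else np.zeros(dim)
    return Z

# Usage with a toy graph and a dummy "encoder" (random vectors stand in
# for embeddings a real graph AE would produce on the core).
G = nx.karate_club_graph()
rng = np.random.default_rng(0)
Z = core_train_then_propagate(
    G, k=3, embed_fn=lambda H: {n: rng.normal(size=16) for n in H})
print(len(Z), "nodes embedded")  # every node of G receives a vector
```

The scalability gain comes from the first step: the expensive encoder only ever sees the k-core, which is typically far smaller than the full graph, while the propagation pass over the remaining nodes is cheap.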

Authors (4)
  1. Guillaume Salha (9 papers)
  2. Romain Hennequin (46 papers)
  3. Viet Anh Tran (4 papers)
  4. Michalis Vazirgiannis (116 papers)
Citations (35)
