Nearly-Linear Time Spectral Graph Reduction for Scalable Graph Partitioning and Data Visualization (1812.08942v1)

Published 21 Dec 2018 in cs.DS and cs.NA

Abstract: This paper proposes a scalable algorithmic framework for spectral reduction of large undirected graphs. The proposed method allows computing much smaller graphs while preserving the key spectral (structural) properties of the original graph. Our framework is built upon the following two key components: a spectrum-preserving node aggregation (reduction) scheme, as well as a spectral graph sparsification framework with iterative edge weight scaling. We show that the resulting spectrally-reduced graphs can robustly preserve the first few nontrivial eigenvalues and eigenvectors of the original graph Laplacian. In addition, the spectral graph reduction method has been leveraged to develop much faster algorithms for multilevel spectral graph partitioning as well as t-distributed Stochastic Neighbor Embedding (t-SNE) of large data sets. We conducted extensive experiments using a variety of large graphs and data sets, and obtained very promising results. For instance, we are able to reduce the "coPapersCiteseer" graph with 0.43 million nodes and 16 million edges to a much smaller graph with only 13K (32X fewer) nodes and 17K (950X fewer) edges in about 16 seconds; the spectrally-reduced graphs also allow us to achieve up to 1100X speedup for spectral graph partitioning and up to 60X speedup for t-SNE visualization of large data sets.
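The spectrum-preservation claim can be illustrated with a small toy experiment. The sketch below is not the paper's algorithm (which combines spectrum-preserving node aggregation with spectral sparsification and iterative edge weight scaling); it is a simplified Python illustration, assuming a naive aggregation obtained by k-means clustering of a spectral embedding, that checks how well the first few nontrivial Laplacian eigenvalues survive coarsening. The graph, cluster counts, and variable names are illustrative choices, not taken from the paper.

```python
import numpy as np
import networkx as nx
from scipy.linalg import eigh
from sklearn.cluster import KMeans

# Original graph: a planted-partition model with clear cluster structure.
G = nx.planted_partition_graph(l=8, k=50, p_in=0.2, p_out=0.01, seed=0)
A = nx.to_numpy_array(G)
L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian L = D - A

# Spectral embedding: the first few nontrivial eigenvectors of L.
k = 5
vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
embedding = vecs[:, 1:k + 1]

# Hypothetical aggregation step (a stand-in for the paper's scheme):
# cluster nodes in the spectral embedding and collapse each cluster into
# one super-node; P is the resulting n x n_coarse aggregation matrix.
n, n_coarse = A.shape[0], 40
labels = KMeans(n_clusters=n_coarse, n_init=10, random_state=0).fit_predict(embedding)
P = np.zeros((n, n_coarse))
P[np.arange(n), labels] = 1.0

# Coarse Laplacian and cluster-size mass matrix; the generalized eigenvalues
# of (P^T L P, P^T P) are Ritz values that should track the smallest
# nontrivial eigenvalues of L when the aggregation respects its eigenvectors.
L_c = P.T @ L @ P
M_c = P.T @ P
coarse_vals = eigh(L_c, M_c, eigvals_only=True)

print("original  lambda_2..6:", np.round(vals[1:k + 1], 3))
print("coarsened lambda_2..6:", np.round(coarse_vals[1:k + 1], 3))
```

Because the coarse space contains the constant vector and approximates the smooth eigenvectors, the generalized eigenvalues upper-bound and closely track the original ones on this example; the paper's contribution is doing this kind of reduction, plus sparsification of the coarse graph, in nearly-linear time on graphs with millions of edges.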

Citations (12)
