Billion-scale Network Embedding with Iterative Random Projection (1805.02396v2)

Published 7 May 2018 in cs.SI, cs.LG, and stat.ML

Abstract: Network embedding, which learns low-dimensional vector representations for nodes in a network, has attracted considerable research attention recently. However, existing methods are incapable of handling billion-scale networks, because they are computationally expensive and, at the same time, difficult to accelerate with distributed computing schemes. To address these problems, we propose RandNE (Iterative Random Projection Network Embedding), a novel and simple billion-scale network embedding method. Specifically, we propose a Gaussian random projection approach to map the network into a low-dimensional embedding space while preserving the high-order proximities between nodes. To reduce the time complexity, we design an iterative projection procedure that avoids the explicit calculation of the high-order proximities. Theoretical analysis shows that our method is extremely efficient and friendly to distributed computing schemes, with no communication cost in the calculation. We also design a dynamic updating procedure that can efficiently incorporate dynamic changes of the network without error aggregation. Extensive experimental results demonstrate the efficiency and efficacy of RandNE over state-of-the-art methods on several tasks, including network reconstruction, link prediction and node classification, on multiple datasets of different scales, ranging from thousands to billions of nodes and edges.
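
Below is a minimal sketch, in Python with NumPy/SciPy, of the iterative Gaussian random projection idea the abstract describes: project with a random matrix, then repeatedly multiply by the sparse adjacency matrix so that each iteration captures a higher-order proximity without ever forming powers of the adjacency matrix explicitly. The function name, the weights `alphas`, and the scaling of the random matrix are illustrative assumptions, not the authors' implementation; the paper's orthogonalization of the initial random matrix and its distributed/dynamic updating procedures are omitted here.

```python
import numpy as np
import scipy.sparse as sp

def iterative_random_projection(adj, dim=128, alphas=(1.0, 1.0, 0.1), seed=0):
    """Embed nodes of a sparse adjacency matrix `adj` (n x n) into `dim` dimensions.

    U_0 is a Gaussian random matrix; each U_i = adj @ U_{i-1} reflects an
    i-th order proximity, computed with one sparse-dense multiply per step
    instead of materializing adj^i. The embedding is a weighted sum of the U_i.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # Gaussian random projection matrix (variance 1/dim, a common JL-style scaling).
    U = rng.normal(0.0, 1.0 / np.sqrt(dim), size=(n, dim))
    emb = alphas[0] * U
    for a in alphas[1:]:
        U = adj @ U          # propagate one more hop of proximity
        emb = emb + a * U
    return emb

# Usage on a toy random graph (hypothetical example):
# adj = sp.random(1000, 1000, density=1e-3, format="csr")
# adj = adj + adj.T                        # symmetrize
# emb = iterative_random_projection(adj)   # (1000, 128) embedding matrix
```

The key cost is the sparse-dense multiplies, each O(#edges x dim), which is what makes the approach amenable to very large graphs; the random projection itself requires no training.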

Authors (5)
  1. Ziwei Zhang (40 papers)
  2. Peng Cui (116 papers)
  3. Haoyang Li (95 papers)
  4. Xiao Wang (508 papers)
  5. Wenwu Zhu (104 papers)
Citations (76)
