Network Embedding via Deep Prediction Model (2104.13323v1)

Published 27 Apr 2021 in cs.LG

Abstract: Network-structured data is becoming ubiquitous in daily life and is growing at a rapid pace. It presents great challenges to feature engineering due to the high non-linearity and sparsity of the data. The local and global structure of real-world networks can be reflected by dynamic transfer behaviors among nodes. This paper proposes a network embedding framework that captures these transfer behaviors on structured networks via deep prediction models. We first design a degree-weight biased random walk model to capture transfer behaviors on the network. A deep network embedding method is then introduced to preserve the transfer possibilities among nodes. A network structure embedding layer is added to conventional deep prediction models, including the Long Short-Term Memory network and the Recurrent Neural Network, to exploit their sequence prediction ability. To preserve the local network neighborhood, we further perform a Laplacian-supervised optimization of the embedding space. Experimental studies are conducted on a variety of datasets, including social networks, citation networks, a biomedical network, a collaboration network, and a language network. The results show that the learned representations can be used effectively as features in a variety of tasks, such as clustering, visualization, classification, reconstruction, and link prediction, achieving promising performance compared with state-of-the-art methods.
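The page stops at the abstract, so purely as an illustration of two components named above, here is a minimal Python sketch of a degree-biased random walk and a Laplacian smoothness penalty. The power-law weighting, the `alpha` parameter, and the use of `networkx` are assumptions for the sketch, not details taken from the paper:

```python
import random

import networkx as nx
import numpy as np

def degree_biased_walk(G, start, walk_length, alpha=1.0):
    """Sample a walk where the next hop is chosen with probability
    proportional to the neighbor's degree raised to the power alpha.
    Illustrative only; not the paper's exact weighting scheme."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = list(G.neighbors(walk[-1]))
        if not neighbors:
            break
        weights = [G.degree(n) ** alpha for n in neighbors]
        walk.append(random.choices(neighbors, weights=weights, k=1)[0])
    return walk

# Generate one walk per node; in the paper, such node sequences are the
# inputs to the LSTM/RNN prediction models.
G = nx.karate_club_graph()
walks = [degree_biased_walk(G, n, walk_length=10) for n in G.nodes()]

# A Laplacian penalty trace(Z^T L Z) pulls embeddings of adjacent nodes
# together; Z here is a random stand-in for the learned embeddings.
L = nx.laplacian_matrix(G).toarray()
Z = np.random.rand(G.number_of_nodes(), 16)
laplacian_penalty = np.trace(Z.T @ L @ Z)
```

In the proposed framework, the walks serve as training sequences for a deep prediction model with a network structure embedding layer, and a Laplacian term of this general form supervises the embedding space to keep local neighborhoods intact.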

Authors (6)
  1. Xin Sun (151 papers)
  2. Zenghui Song (1 paper)
  3. Yongbo Yu (5 papers)
  4. Junyu Dong (116 papers)
  5. Claudia Plant (29 papers)
  6. Christian Boehm (5 papers)
Citations (1)
