
Deep Partial Multiplex Network Embedding (2203.02656v1)

Published 5 Mar 2022 in cs.LG and cs.SI

Abstract: Network embedding is an effective technique to learn low-dimensional representations of nodes in networks. Real-world networks are usually multiplex, with multi-view representations arising from different relations. Recently, there has been increasing interest in network embedding on multiplex data. However, most existing multiplex approaches assume that the data is complete in all views. In real applications, it is often the case that each view suffers from missing data, resulting in partial multiplex data. In this paper, we present a novel Deep Partial Multiplex Network Embedding approach to deal with incomplete data. In particular, the network embeddings are learned by simultaneously minimizing the deep reconstruction loss with an autoencoder neural network, enforcing data consistency across views via common latent subspace learning, and preserving the data topological structure within the same network through the graph Laplacian. We further prove the orthogonal invariance property of the learned embeddings and connect our approach with binary embedding techniques. Experiments on four multiplex benchmarks demonstrate the superior performance of the proposed approach over several state-of-the-art methods on node classification, link prediction and clustering tasks.
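The abstract describes an objective that combines three terms: a per-view autoencoder reconstruction loss, a cross-view consistency term toward a common latent subspace, and a graph Laplacian regularizer over each view's observed subgraph. Below is a minimal PyTorch sketch of that loss structure, assuming dense features and adjacency matrices; the layer sizes, weighting coefficients `alpha`/`beta`, and the exact form of the consistency term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAutoencoder(nn.Module):
    """Per-view autoencoder mapping observed nodes of one view into the embedding space.
    Hypothetical sketch; layer widths are assumptions."""
    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def partial_multiplex_loss(views, masks, adjacencies, models, common_z,
                           alpha=1.0, beta=1.0):
    """views[v]: dense node features of view v; masks[v]: bool tensor of observed nodes;
    adjacencies[v]: dense adjacency of view v; common_z: shared latent embedding for
    all nodes, learned jointly. Weights alpha/beta are illustrative."""
    loss = 0.0
    for v, model in enumerate(models):
        x, m, a = views[v], masks[v], adjacencies[v]
        z, x_hat = model(x[m])
        # 1) deep reconstruction loss on the observed part of this view
        loss = loss + F.mse_loss(x_hat, x[m])
        # 2) cross-view consistency: pull view embeddings toward the common subspace
        loss = loss + alpha * F.mse_loss(z, common_z[m])
        # 3) graph Laplacian smoothness on the observed subgraph:
        #    tr(Z^T L Z) with L = D - A keeps connected nodes close in the embedding
        a_obs = a[m][:, m]
        lap = torch.diag(a_obs.sum(dim=1)) - a_obs
        loss = loss + beta * torch.trace(z.t() @ lap @ z)
    return loss
```

In this sketch, `common_z` would be optimized alongside the per-view autoencoders so that all views agree on a shared representation even when some nodes are unobserved in a given view.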

Authors (8)
  1. Qifan Wang (129 papers)
  2. Yi Fang (151 papers)
  3. Anirudh Ravula (6 papers)
  4. Ruining He (14 papers)
  5. Bin Shen (45 papers)
  6. Jingang Wang (71 papers)
  7. Xiaojun Quan (52 papers)
  8. Dongfang Liu (44 papers)
Citations (10)
