Multi-Task Representation Learning with Multi-View Graph Convolutional Networks (2103.02236v1)

Published 3 Mar 2021 in cs.SI

Abstract: Link prediction and node classification are two important downstream tasks of network representation learning. Existing methods achieve acceptable results but perform the two tasks separately, which duplicates work and ignores the correlations between tasks. Moreover, conventional models treat the information of multiple views identically, so they fail to learn robust representations for downstream tasks. To this end, we tackle link prediction and node classification simultaneously via multi-task multi-view learning. We first explain the feasibility and advantages of multi-task multi-view learning for these two tasks. We then propose a novel model, MT-MVGCN, that performs link prediction and node classification simultaneously. Specifically, we design a multi-view graph convolutional network to extract the abundant information of a network's multiple views, which is shared across tasks. We further apply two attention mechanisms, a view attention mechanism and a task attention mechanism, so that both views and tasks can guide the view fusion process. Moreover, view reconstruction can be introduced as an auxiliary task to boost the performance of the proposed model. Experiments on real-world network datasets demonstrate that our model is efficient yet effective, outperforming advanced baselines on both tasks.
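
The abstract outlines a shared multi-view encoder whose fused representation feeds two task heads. Below is a minimal PyTorch sketch of that idea, assuming one graph-convolution layer per view, a simple per-node view attention for fusion, a linear node-classification head, and an inner-product link-prediction decoder. The class name, dimensions, and decoder choice are illustrative assumptions; the paper's task attention mechanism and view-reconstruction decoder are omitted, so this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MTMVGCNSketch(nn.Module):
    """Illustrative sketch (not the authors' code): per-view GCN encoders,
    attention-weighted view fusion, and two task heads sharing the fused
    node representations."""

    def __init__(self, num_views: int, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        # One graph-convolution weight matrix per view; propagation uses a
        # pre-normalized adjacency in forward().
        self.view_weights = nn.ModuleList(
            nn.Linear(in_dim, hid_dim, bias=False) for _ in range(num_views)
        )
        # View attention: scores each view's representation per node.
        self.view_attn = nn.Linear(hid_dim, 1)
        # Node-classification head; link prediction reuses the fused
        # embeddings through an inner-product decoder below.
        self.cls_head = nn.Linear(hid_dim, num_classes)

    def forward(self, adjs: list[torch.Tensor], x: torch.Tensor):
        # adjs: one normalized (n x n) adjacency per view; x: shared
        # (n x in_dim) node feature matrix.
        views = [F.relu(a @ w(x)) for a, w in zip(adjs, self.view_weights)]
        h = torch.stack(views, dim=1)                    # (n, num_views, hid_dim)
        alpha = torch.softmax(self.view_attn(h), dim=1)  # per-node view weights
        z = (alpha * h).sum(dim=1)                       # fused representation
        logits = self.cls_head(z)                        # node-classification logits
        link_scores = z @ z.t()                          # pairwise link scores
        return logits, link_scores
```

Training such a sketch would sum a cross-entropy loss on labeled nodes and a binary cross-entropy loss on sampled positive/negative edges, optionally adding a view-reconstruction term as the abstract suggests; how the paper weights the tasks (its task attention) is not reproduced here.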

Authors (6)
  1. Hong Huang (56 papers)
  2. Yu Song (155 papers)
  3. Yao Wu (20 papers)
  4. Jia Shi (45 papers)
  5. Xia Xie (8 papers)
  6. Hai Jin (83 papers)
Citations (22)
