RRLFSOR: An Efficient Self-Supervised Learning Strategy of Graph Convolutional Networks (2108.07481v2)

Published 17 Aug 2021 in cs.LG

Abstract: Graph Convolutional Networks (GCNs) are widely used in many applications, yet they still require large amounts of labelled data for training. Moreover, the adjacency matrix of a GCN is fixed, so the data-processing strategy cannot efficiently adjust the quantity of training data derived from the built graph structures. To further improve the performance and the self-learning ability of GCNs, in this paper we propose an efficient self-supervised learning strategy for GCNs, named randomly removed links with a fixed step at one region (RRLFSOR). RRLFSOR can be regarded as a new data augmenter that alleviates over-smoothing. RRLFSOR is evaluated on two efficient and representative GCN models with three public citation-network datasets: Cora, PubMed, and Citeseer. Experiments on transductive link-prediction tasks show that our strategy consistently outperforms the baseline models by up to 21.34% in accuracy on the three benchmark datasets.
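The abstract describes RRLFSOR as an edge-removal data augmenter: links within one region of the graph are dropped at a fixed step before training. The exact procedure is not given in the abstract, so the sketch below only illustrates the general idea under stated assumptions; the function name, the `region_nodes`/`step` parameters, and the interpretation of "fixed step" (removing one out of every `step` candidate edges) are all hypothetical.

```python
import numpy as np

def drop_region_links(adj, region_nodes, step, seed=0):
    """Illustrative edge-dropping augmenter (hypothetical sketch; the paper's
    exact RRLFSOR procedure is not specified in the abstract).

    Removes one out of every `step` undirected links incident to
    `region_nodes` in a symmetric 0/1 adjacency matrix, returning a
    perturbed copy that can serve as an augmented training graph.
    """
    rng = np.random.default_rng(seed)
    adj = adj.copy()
    # Collect each undirected edge once (i < j) that touches the region.
    rows, cols = np.nonzero(np.triu(adj, k=1))
    region = set(region_nodes)
    edges = [(i, j) for i, j in zip(rows, cols) if i in region or j in region]
    rng.shuffle(edges)
    # Remove links at a fixed step: every `step`-th candidate edge.
    for i, j in edges[::step]:
        adj[i, j] = 0
        adj[j, i] = 0  # keep the matrix symmetric
    return adj
```

In a self-supervised setup of this kind, the model would be trained to predict the removed links from the perturbed graph, turning the deleted edges into free supervision.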

Authors (7)
  1. Feng Sun
  2. Ajith Kumar V
  3. Guanci Yang
  4. Qikui Zhu
  5. Yiyun Zhang
  6. Ansi Zhang
  7. Dhruv Makwana
