
L$^2$-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks (2003.13606v11)

Published 30 Mar 2020 in cs.LG and stat.ML

Abstract: Graph convolutional networks (GCNs) are increasingly popular in many applications, yet remain notoriously hard to train over large graph datasets because they must compute node representations recursively from the representations of their neighbors. Current GCN training algorithms suffer either from computational costs that grow exponentially with the number of layers, or from high memory usage for loading the entire graph and all node embeddings. In this paper, we propose a novel, efficient layer-wise training framework for GCNs (L-GCN) that disentangles feature aggregation and feature transformation during training, greatly reducing time and memory complexity. We present a theoretical analysis of L-GCN under the graph isomorphism framework, showing that, under mild conditions, L-GCN yields GCNs as powerful as those produced by the more costly conventional training algorithm. We further propose L$^2$-GCN, which learns a controller for each layer that automatically adjusts the number of training epochs per layer in L-GCN. Experiments show that L-GCN is at least an order of magnitude faster than state-of-the-art methods, with memory usage that is independent of dataset size, while maintaining comparable prediction performance. With the learned controller, L$^2$-GCN can further cut the training time in half. Our code is available at https://github.com/Shen-Lab/L2-GCN.
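
The abstract describes the core mechanism only at a high level: each layer is trained greedily and in isolation, with the parameter-free feature aggregation precomputed once so that the per-epoch work reduces to fitting that layer's feature transformation. The sketch below illustrates this disentangling in PyTorch; the layer sizes, auxiliary per-layer classifier, and training loop are illustrative assumptions rather than the authors' implementation (see the linked repository for that), and the fixed `epochs_per_layer` stands in for the schedule that L$^2$-GCN's learned controller would adjust automatically.

```python
import torch
import torch.nn as nn

def train_layer_wise(adj_norm, features, labels, train_mask,
                     hidden_dims=(128, 128), epochs_per_layer=100, lr=0.01):
    """Hypothetical layer-wise GCN training sketch.

    adj_norm:   normalized adjacency as a sparse (N x N) tensor
    features:   dense node feature matrix (N x F)
    labels:     node labels (N,), train_mask: boolean mask over nodes
    """
    x = features
    for out_dim in hidden_dims:
        # (1) Feature aggregation: parameter-free, so it is computed once
        #     per layer, outside the epoch loop.
        agg = torch.sparse.mm(adj_norm, x)

        # (2) Feature transformation: a single linear layer, trained with an
        #     auxiliary classifier that supplies the per-layer loss signal.
        transform = nn.Linear(agg.size(1), out_dim)
        classifier = nn.Linear(out_dim, int(labels.max()) + 1)
        opt = torch.optim.Adam(
            list(transform.parameters()) + list(classifier.parameters()), lr=lr)

        for _ in range(epochs_per_layer):
            opt.zero_grad()
            h = torch.relu(transform(agg))
            loss = nn.functional.cross_entropy(
                classifier(h)[train_mask], labels[train_mask])
            loss.backward()
            opt.step()

        # Freeze this layer's output and feed it to the next layer,
        # so earlier layers are never revisited.
        with torch.no_grad():
            x = torch.relu(transform(agg))
    return x
```

Because the aggregation is hoisted out of the inner loop and each layer is trained only once, the memory footprint depends on the current layer's activations rather than on the full multi-layer computation graph, which is the source of the time and memory savings the abstract claims.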

Authors (4)
  1. Yuning You (10 papers)
  2. Tianlong Chen (202 papers)
  3. Zhangyang Wang (375 papers)
  4. Yang Shen (98 papers)
Citations (79)
