
Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks

Published 17 Nov 2019 in cs.LG, cs.SI, and stat.ML | arXiv:1911.07323v1

Abstract: Graph convolutional networks (GCNs) have recently received wide attention, due to their successful applications in different graph tasks and different domains. Training GCNs for a large graph, however, is still a challenge. Original full-batch GCN training requires calculating the representation of all the nodes in the graph per GCN layer, which brings in high computation and memory costs. To alleviate this issue, several sampling-based methods have been proposed to train GCNs on a subset of nodes. Among them, the node-wise neighbor-sampling method recursively samples a fixed number of neighbor nodes, and thus its computation cost suffers from exponentially growing neighbor size; while the layer-wise importance-sampling method discards the neighbor-dependent constraints, and thus the nodes sampled across layers suffer from the sparse-connection problem. To deal with these two problems, we propose a new effective sampling algorithm called LAyer-Dependent ImportancE Sampling (LADIES). Based on the sampled nodes in the upper layer, LADIES selects their neighborhood nodes, constructs a bipartite subgraph, and computes the importance probability accordingly. Then, it samples a fixed number of nodes according to the calculated probability, and recursively repeats this procedure per layer to construct the whole computation graph. We prove theoretically and experimentally that our proposed sampling algorithm outperforms the previous sampling methods in terms of both time and memory costs. Furthermore, LADIES is shown to have better generalization accuracy than original full-batch GCN, due to its stochastic nature.

Citations (260)

Summary

  • The paper introduces LADIES, a novel algorithm that dynamically adjusts sampling probabilities across layers to enhance computational efficiency and model accuracy.
  • It constructs bipartite subgraphs and recursively builds complete computation graphs, reducing memory usage compared to full-batch GCN training.
  • Experimental benchmarks show that LADIES outperforms methods like GraphSAGE and FastGCN in convergence speed and generalization performance.


The paper introduces LADIES, a layer-dependent importance sampling algorithm tailored for training deep Graph Convolutional Networks (GCNs) on large graphs. The growing popularity of GCNs, owing to their success across a range of graph-related tasks, brings computational challenges, especially at large scale. Full-batch GCN training is resource-intensive because it must compute representations for every node in the graph at every layer, demanding substantial memory and processing power. This cost has motivated sampling-based techniques that train on only a subset of nodes.

Existing sampling techniques, namely node-wise neighbor sampling and layer-wise importance sampling, each have shortcomings. Node-wise sampling (as in GraphSAGE) suffers from exponential growth in the number of sampled neighbors as network depth increases, driving up computational cost. Layer-wise methods (as in FastGCN) sample each layer independently, discarding the neighbor-dependent constraints, so nodes sampled in adjacent layers can end up only sparsely connected and much of the sampled computation is wasted. LADIES addresses both problems with a layer-dependent sampling approach that maximizes computational efficiency while retaining model accuracy.
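The trade-off above can be seen with back-of-the-envelope arithmetic: a fixed per-node fanout compounds multiplicatively with depth, while a fixed per-layer budget grows only additively. A minimal illustration (the fanout and budget values here are hypothetical, chosen only for the arithmetic):

```python
# Node-wise sampling with fanout k touches up to k**L supporting nodes
# per output node after L layers; a layer-wise budget of s nodes per
# layer touches at most s * L nodes per batch, shared by all outputs.
fanout, budget, num_layers = 10, 512, 3

node_wise_cost = fanout ** num_layers    # 10**3 = 1000 nodes per output node
layer_wise_cost = budget * num_layers    # 512*3 = 1536 nodes per whole batch

print(node_wise_cost, layer_wise_cost)
```

With even a modest batch of output nodes, the node-wise receptive field dwarfs the layer-wise budget, which is why layer-wise schemes scale to deeper models.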

The LADIES algorithm starts from the nodes sampled in the upper layer and restricts attention to the union of their neighborhoods. By forming a bipartite subgraph between the two layers and computing importance probabilities over the candidate neighbors, LADIES recursively constructs the complete computation graph one layer at a time. Theoretical analysis and experimental benchmarks show that LADIES surpasses previous methods in both memory and time efficiency. Moreover, owing to its stochastic nature, the algorithm achieves better generalization accuracy than the original full-batch GCN.
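The per-layer procedure described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes a normalized adjacency matrix in scipy CSR form, takes importance probabilities proportional to the squared column norms of the adjacency restricted to the upper layer's rows, and reweights the bipartite subgraph by the inverse sampling probabilities to keep the aggregation unbiased. The function name and signature are hypothetical.

```python
import numpy as np
import scipy.sparse as sp

def ladies_sampler(adj, batch_nodes, samples_per_layer, num_layers, rng=None):
    """Sketch of one LADIES computation graph.

    adj: normalized adjacency as a scipy.sparse CSR matrix.
    batch_nodes: output-layer node ids (the mini-batch).
    Returns a list of (sampled_node_ids, bipartite_adj) pairs,
    ordered from the input layer up to the output layer.
    """
    rng = rng or np.random.default_rng()
    layers = []
    cur = np.asarray(batch_nodes)
    for _ in range(num_layers):
        # Restrict adj to the rows of the nodes sampled in the layer
        # above; the candidate pool is exactly their neighborhood union.
        rows = adj[cur, :]
        # Layer-dependent importance: p(u) proportional to the squared
        # norm of column u of the restricted adjacency.
        col_norms = np.asarray(rows.multiply(rows).sum(axis=0)).ravel()
        candidates = np.nonzero(col_norms)[0]
        probs = col_norms[candidates] / col_norms[candidates].sum()
        n_sample = min(samples_per_layer, len(candidates))
        idx = rng.choice(len(candidates), size=n_sample, replace=False, p=probs)
        sampled = candidates[idx]
        # Bipartite subgraph between upper-layer nodes (rows) and the
        # sampled lower-layer nodes (columns), reweighted by inverse
        # sampling probability so the estimator stays unbiased.
        weights = 1.0 / (n_sample * probs[idx])
        bipartite = sp.csr_matrix(rows[:, sampled].multiply(weights.reshape(1, -1)))
        layers.append((sampled, bipartite))
        cur = sampled  # recurse: sample the next layer down
    layers.reverse()
    return layers
```

Because every layer draws a fixed number of nodes from a shared candidate pool, memory stays linear in depth, and conditioning each layer's probabilities on the layer above keeps the sampled bipartite graphs dense rather than sparsely connected.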

Advancing beyond the sampling approaches employed in GraphSAGE and FastGCN, LADIES adjusts its sampling probabilities per layer, conditioned on the nodes already sampled in the layer above, which yields a denser sampled computation graph. This confers a dual advantage: lower sampling complexity and faster convergence. The layer-dependent importance weights concentrate probability on the nodes most relevant to the current layer, reducing computational overhead while keeping the sampled subgraph well connected.

This research holds foundational implications for both theoretical and practical realms of GCN optimization. The reduced memory and processing demands align with the burgeoning requirements of applications that utilize large-scale graph data. From a theoretical standpoint, LADIES enriches the body of knowledge surrounding efficient deep learning model training within the graph data domain. Looking forward, the methodologies established by LADIES may inspire subsequent advances in sampling strategies, potentially expanding into broader applications beyond graph convolutional networks.

In conclusion, LADIES marks a pivotal development in optimizing the training of GCNs, addressing critical computational bottlenecks while fostering enhanced generalization capabilities. Its focus on layer-dependent sampling points towards new directions in both algorithmic development and the application of GCNs in data-intensive environments.
