A Unified Lottery Ticket Hypothesis for Graph Neural Networks (2102.06790v2)

Published 12 Feb 2021 in cs.LG, cs.AI, and stat.ML

Abstract: With graphs rapidly growing in size and deeper graph neural networks (GNNs) emerging, the training and inference of GNNs become increasingly expensive. Existing network weight pruning algorithms cannot address the main space and computational bottleneck in GNNs, caused by the size and connectivity of the graph. To this end, this paper first presents a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights, for effectively accelerating GNN inference on large-scale graphs. Leveraging this new tool, we further generalize the recently popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network, which can be jointly identified from the original GNN and the full dense graph by iteratively applying UGS. Like its counterpart in convolutional neural networks, GLT can be trained in isolation to match the performance of training with the full model and graph, and can be drawn from both randomly initialized and self-supervised pre-trained GNNs. Our proposal has been experimentally verified across various GNN architectures and diverse tasks, on both small-scale graph datasets (Cora, Citeseer and PubMed), and large-scale datasets from the challenging Open Graph Benchmark (OGB). Specifically, for node classification, our found GLTs achieve the same accuracies with 20%~98% MACs saving on small graphs and 25%~85% MACs saving on large ones. For link prediction, GLTs lead to 48%~97% and 70% MACs saving on small and large graph datasets, respectively, without compromising predictive performance. Codes available at https://github.com/VITA-Group/Unified-LTH-GNN.

An Examination of a Unified Lottery Ticket Hypothesis for Graph Neural Networks

The paper "A Unified Lottery Ticket Hypothesis for Graph Neural Networks" by Chen et al. presents a significant contribution to the efficient training and inference of Graph Neural Networks (GNNs). Given the increasing complexity and computational demands of GNNs due to large-scale graph data, the authors propose a Unified GNN Sparsification (UGS) framework that aims to address the computational challenges posed by large graphs and deep neural models. This holistic framework targets both the model weights and the graph structure, offering a novel approach to accelerating GNN inference through sparsification.

Key Technical Contributions

  1. Unified GNN Sparsification (UGS): The core innovation is UGS, which jointly prunes the graph adjacency matrix and the GNN model weights. This dual focus reduces computational load more comprehensively than traditional pruning methods, which typically target the model weights alone. Because UGS makes no assumptions about the GNN architecture or the graph's inherent structure, it is broadly applicable (a minimal mask-based sketch of the idea appears after this list).
  2. Graph Lottery Ticket Hypothesis (GLT): Building on the Lottery Ticket Hypothesis (LTH), traditionally studied in overparameterized models such as Convolutional Neural Networks (CNNs), the paper extends the hypothesis to GNNs. A graph lottery ticket is defined as a pair of a core sub-dataset and a sparse sub-network, identified by iteratively applying UGS. The authors show that such a pair can be trained in isolation to match the performance of the full model on the full graph, whether the GNN starts from a random initialization or from self-supervised pre-training (see the iterative pruning sketch after this list).
  3. Empirical Validation: The authors provide experimental evidence across several GNN architectures and datasets, ranging from small graphs such as Cora, Citeseer, and PubMed to large-scale graphs from the Open Graph Benchmark (OGB). The identified GLTs match the accuracy of the full models while significantly reducing multiply-accumulate operations (MACs): for node classification, 20%-98% MACs savings on small graphs and 25%-85% on large ones; for link prediction, 48%-97% and roughly 70%, respectively.
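
To make the mask-based formulation concrete, below is a minimal PyTorch sketch of a UGS-style layer: one trainable mask gates each graph edge, another gates each weight entry, and an L1 penalty on both masks drives them toward sparsity during training. The class, function, and hyperparameter names (MaskedGCNLayer, ugs_objective, lam_graph, lam_weight) are illustrative placeholders of ours, not the authors' released code.

```python
import torch
import torch.nn as nn


class MaskedGCNLayer(nn.Module):
    """One graph-convolution layer whose edges and weights are gated by
    trainable masks, in the spirit of UGS (illustrative sketch)."""

    def __init__(self, in_dim, out_dim, num_edges):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight)
        # Differentiable masks over the graph edges and the model weights.
        self.edge_mask = nn.Parameter(torch.ones(num_edges))
        self.weight_mask = nn.Parameter(torch.ones(in_dim, out_dim))

    def forward(self, x, edge_index):
        # edge_index: [2, num_edges] tensor of (source, target) node pairs.
        src, dst = edge_index
        # Scale each message by its edge-mask entry (graph sparsification).
        messages = x[src] * self.edge_mask.unsqueeze(-1)
        agg = torch.zeros_like(x).index_add_(0, dst, messages)
        # Gate the layer weights with the weight mask (model sparsification).
        return agg @ (self.weight * self.weight_mask)


def ugs_objective(task_loss, model, lam_graph=1e-4, lam_weight=1e-4):
    """Task loss plus L1 penalties that push both masks toward sparsity,
    mirroring the joint graph/weight sparsification objective."""
    layers = [m for m in model.modules() if isinstance(m, MaskedGCNLayer)]
    l1_graph = sum(l.edge_mask.abs().sum() for l in layers)
    l1_weight = sum(l.weight_mask.abs().sum() for l in layers)
    return task_loss + lam_graph * l1_graph + lam_weight * l1_weight
```

During training both masks remain real-valued; they are only thresholded when the lowest-magnitude entries are pruned, as in the loop sketched next.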
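
The graph lottery ticket itself is found by interleaving this masked training with iterative pruning and weight rewinding. The sketch below assumes a `train_fn` callback that runs one round of UGS training on the masked model, and that mask parameters are named `edge_mask` / `weight_mask` as in the previous sketch; the per-round pruning ratios (roughly 5% of edges, 20% of weights) follow the paper's reported setup, while the function and helper names are again hypothetical.

```python
import copy
import torch


def find_graph_lottery_ticket(model, train_fn, p_graph=0.05, p_weight=0.20, rounds=20):
    """Iterative UGS sketch for identifying a graph lottery ticket (GLT).
    `train_fn(model) -> model` trains weights and masks jointly; after each
    round the lowest-magnitude mask entries are pruned and the parameters are
    rewound to their original initialization. For brevity, entries pruned to
    zero are not explicitly frozen here, as the full method would require."""
    init_state = copy.deepcopy(model.state_dict())  # original initialization

    def binarize(mask, frac):
        # Among surviving (non-zero) entries, drop the lowest-magnitude `frac`.
        surviving = mask.detach().abs().flatten()
        surviving = surviving[surviving > 0]
        k = max(1, int(frac * surviving.numel()))
        threshold = torch.kthvalue(surviving, k).values
        mask.data = (mask.detach().abs() > threshold).float()

    for _ in range(rounds):
        model = train_fn(model)  # jointly train GNN weights and both masks
        pruned = {}
        for name, param in model.named_parameters():
            if name.endswith("edge_mask"):
                binarize(param, p_graph)
                pruned[name] = param.detach().clone()
            elif name.endswith("weight_mask"):
                binarize(param, p_weight)
                pruned[name] = param.detach().clone()
        # Rewind all parameters to the initialization, then restore the pruned masks.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, param in model.named_parameters():
                if name in pruned:
                    param.copy_(pruned[name])
    return model
```

The binarized masks that survive the final round define the core sub-graph and sparse sub-network that together constitute the graph lottery ticket.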

Implications and Future Directions

The practical implications of this work are notable, especially in environments where computational resources are constrained. Efficient GNNs enabled by UGS and GLT have the potential to be deployed in edge computing scenarios, real-time inference tasks, and other contexts where speed and resource efficiency are paramount.

Theoretically, this work opens new research avenues into the nature of sparsity in graph-centered data structures and its impact on learning models. The generalization of LTH to GNNs invites further exploration into the universality of lottery tickets across different neural network paradigms.

One promising future direction is investigating the GLT's impact on more advanced GNN models and exploring hybrid strategies that combine UGS with other compression techniques such as quantization or low-rank approximation. Additionally, adapting this framework to enhance explainability in GNNs could yield intriguing insights, particularly into how the retained sparse graph structure relates to what the model learns.

In conclusion, this paper presents a methodologically robust framework that expands the landscape of computational efficiency in GNNs, offering substantial evidence for the applicability of the Lottery Ticket Hypothesis in graph domains. This work is a pivotal step toward optimizing GNNs for scalable, real-world applications.

Authors (5)
  1. Tianlong Chen (202 papers)
  2. Yongduo Sui (14 papers)
  3. Xuxi Chen (20 papers)
  4. Aston Zhang (48 papers)
  5. Zhangyang Wang (374 papers)
Citations (158)