Sparse Decomposition of Graph Neural Networks
Abstract: Graph Neural Networks (GNNs) exhibit superior performance in graph representation learning, but their inference cost can be high because the aggregation operation may require fetching the features of a very large number of nodes from memory. This cost is the major obstacle to deploying GNN models for *online prediction*, where predictions must reflect potentially dynamic node features. To address this, we propose an approach that reduces the number of nodes included during aggregation. We achieve this through a sparse decomposition, learning to approximate node representations as a weighted sum of linearly transformed features of a carefully selected subset of nodes within the extended neighbourhood. The approach achieves linear complexity with respect to the average node degree and the number of layers in the graph neural network. We introduce an algorithm to compute the optimal parameters for the sparse decomposition, ensuring an accurate approximation of the original GNN model, and present effective strategies to reduce the training time and improve the learning process. We demonstrate via extensive experiments that our method outperforms baselines designed for inference speedup, achieving significant accuracy gains with comparable inference times on both node classification and spatio-temporal forecasting tasks.
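To make the core approximation concrete, here is a minimal PyTorch sketch of the inference step the abstract describes: each node's representation is approximated by a weighted sum of linearly transformed features of a small, fixed support set of nodes, so inference touches only `k` feature rows instead of the full multi-hop neighbourhood. All names, shapes, and the random initialisation are hypothetical illustrations; the paper's actual parameterisation, support-selection algorithm, and fitting procedure are not shown here.

```python
import torch

# Hypothetical sizes, for illustration only.
num_nodes, in_dim, out_dim, k = 1000, 64, 32, 8

x = torch.randn(num_nodes, in_dim)   # (possibly dynamic) node features
W = torch.randn(in_dim, out_dim)     # learned linear transformation

# Per target node: indices of its k selected support nodes and the learned
# mixing weights. In the method these would be fitted offline to approximate
# the original GNN's outputs; random values stand in for them here.
support = torch.randint(0, num_nodes, (num_nodes, k))
coeffs = torch.randn(num_nodes, k)

def approx_representation(v: int) -> torch.Tensor:
    """Approximate node v's GNN representation as a weighted sum of
    linearly transformed features of its k support nodes."""
    feats = x[support[v]]          # fetch only k rows, not the whole
                                   # multi-hop neighbourhood
    return coeffs[v] @ (feats @ W) # (k,) @ (k, out_dim) -> (out_dim,)

h_v = approx_representation(0)     # online prediction for node 0
```

Because `support` and `coeffs` are fixed after training, the per-node inference cost is O(k) feature fetches regardless of graph depth, which is the source of the linear complexity claimed in the abstract.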