A Teacher-Free Graph Knowledge Distillation Framework with Dual Self-Distillation (2403.03483v1)
Abstract: Recent years have witnessed great success in handling graph-related tasks with Graph Neural Networks (GNNs). Despite their academic success, Multi-Layer Perceptrons (MLPs) remain the primary workhorse in practical industrial applications. One reason for this academic-industry gap is the neighborhood-fetching latency incurred by the data dependency of GNNs. To bridge this gap, Graph Knowledge Distillation (GKD) has been proposed, usually built on a standard teacher-student architecture, to distill knowledge from a large teacher GNN into a lightweight student GNN or MLP. However, we find in this paper that neither a teacher nor GNNs are necessary for graph knowledge distillation. We propose a Teacher-Free Graph Self-Distillation (TGS) framework that requires no teacher model or GNNs during either training or inference. More importantly, the proposed TGS framework is based purely on MLPs, where structural information is used only implicitly to guide dual knowledge self-distillation between each target node and its neighborhood. As a result, TGS enjoys the benefits of graph topology awareness during training but is free from data dependency at inference. Extensive experiments show that dual self-distillation greatly improves the performance of vanilla MLPs; for example, TGS improves over vanilla MLPs by 15.54% on average and outperforms state-of-the-art GKD algorithms on six real-world datasets. In terms of inference speed, TGS infers 75X-89X faster than existing GNNs and 16X-25X faster than classical inference acceleration methods.
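The abstract only outlines the idea at a high level, so the following is a minimal PyTorch sketch of what MLP-only dual self-distillation between a node and its neighborhood could look like. The exact loss form, temperature, loss weight, and aggregation used by TGS are not specified here, so everything below (the symmetric KL terms, the row-normalized neighborhood average, the 0.5 weight) is an illustrative assumption, not the paper's implementation.

```python
# Illustrative sketch only: the loss form and hyperparameters are assumptions,
# not the TGS paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLP(nn.Module):
    """Plain 2-layer MLP; no message passing, so inference needs no neighbor fetching."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, out_dim)
        )

    def forward(self, x):
        return self.net(x)


def dual_self_distill_loss(logits, adj, tau=1.0):
    """Dual (two-direction) self-distillation between each node's prediction and
    the mean prediction of its neighbors. The adjacency matrix appears only in
    this training loss, never in the forward pass."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    neigh_logits = (adj @ logits) / deg  # average of neighbor predictions

    # node -> neighborhood: the (detached) neighborhood average acts as teacher
    l_n2g = F.kl_div(
        F.log_softmax(logits / tau, dim=-1),
        F.softmax(neigh_logits / tau, dim=-1).detach(),
        reduction="batchmean",
    )
    # neighborhood -> node: the (detached) node prediction acts as teacher
    l_g2n = F.kl_div(
        F.log_softmax(neigh_logits / tau, dim=-1),
        F.softmax(logits / tau, dim=-1).detach(),
        reduction="batchmean",
    )
    return l_n2g + l_g2n


# Toy usage: 100 nodes, 16 features, 7 classes, random graph and labels.
x = torch.randn(100, 16)
adj = (torch.rand(100, 100) < 0.05).float()
y = torch.randint(0, 7, (100,))
model = MLP(16, 64, 7)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(10):
    logits = model(x)
    loss = F.cross_entropy(logits, y) + 0.5 * dual_self_distill_loss(logits, adj)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, only the MLP forward pass is needed -- no neighbors are fetched.
pred = model(x).argmax(dim=-1)
```

The property this sketch tries to preserve is the one the abstract emphasizes: the graph structure guides training only through the self-distillation loss, so inference reduces to a single MLP forward pass per node.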