
Multi-View Subgraph Neural Networks: Self-Supervised Learning with Scarce Labeled Data (2404.12569v1)

Published 19 Apr 2024 in cs.LG and cs.AI

Abstract: While graph neural networks (GNNs) have become the de facto standard for graph-based node classification, they impose a strong assumption on the availability of sufficient labeled samples. This assumption restricts the classification performance of prevailing GNNs in many real-world applications that suffer from low-data regimes. Specifically, features extracted from scarce labeled nodes cannot provide sufficient supervision for the unlabeled samples, leading to severe over-fitting. In this work, we point out that leveraging subgraphs to capture long-range dependencies can augment the representation of a node with homophily properties, thus alleviating the low-data regime. However, prior works leveraging subgraphs fail to capture the long-range dependencies among nodes. To this end, we present a novel self-supervised learning framework, called multi-view subgraph neural networks (Muse), for handling long-range dependencies. In particular, we propose an information-theoretic identification mechanism to identify two types of subgraphs from the views of the input space and the latent space, respectively. The former captures the local structure of the graph, while the latter captures the long-range dependencies among nodes. By fusing these two views of subgraphs, the learned representations can preserve the topological properties of the graph at large, including the local structure and long-range dependencies, thus maximizing their expressiveness for downstream node classification tasks. Experimental results show that Muse outperforms alternative methods on node classification tasks with limited labeled data.
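The two-view idea in the abstract can be illustrated with a minimal sketch. Note this is a hypothetical toy reconstruction, not the paper's actual method: the paper uses an information-theoretic mechanism to identify subgraphs, whereas the sketch below substitutes a simple k-nearest-neighbor graph in embedding space for the latent view, and plain mean-aggregation for message passing. The function names (`muse_sketch`, `knn_adjacency`, `propagate`) and the fusion weight `alpha` are illustrative assumptions.

```python
import numpy as np

def knn_adjacency(X, k):
    """Symmetric k-NN adjacency over row vectors of X (toy latent-view builder)."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d, np.inf)                         # exclude self-edges
    idx = np.argsort(d, axis=1)[:, :k]                  # k nearest neighbors per node
    A = np.zeros((len(X), len(X)))
    for i, nbrs in enumerate(idx):
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)                           # symmetrize

def propagate(A, X):
    """One step of mean-aggregation over neighbors, with self-loops added."""
    A_hat = A + np.eye(len(A))
    deg = A_hat.sum(1, keepdims=True)                   # >= 1, so division is safe
    return (A_hat @ X) / deg

def muse_sketch(X, A_local, k_latent=3, alpha=0.5):
    """Fuse a local-structure view with a latent-space view (toy analogue of Muse).

    A_local is the input-graph adjacency (local view). The latent view links
    nodes that are close in embedding space, which can connect nodes that are
    far apart on the original graph -- a stand-in for long-range dependencies.
    """
    Z = propagate(A_local, X)                # crude embedding for the latent view
    A_latent = knn_adjacency(Z, k_latent)    # latent-space subgraph view
    H_local = propagate(A_local, X)          # representation from the local view
    H_latent = propagate(A_latent, X)        # representation from the latent view
    return alpha * H_local + (1 - alpha) * H_latent  # fused representation
```

The fused output preserves both neighborhood structure (via `A_local`) and embedding-space proximity (via `A_latent`); the paper's actual fusion and self-supervised objectives are more involved than this convex combination.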
