Graph Contrastive Learning with Low-Rank Regularization and Low-Rank Attention for Noisy Node Classification
Abstract: Graph Neural Networks (GNNs) have achieved remarkable success in learning node representations and show strong performance on tasks such as node classification. However, recent findings indicate that noise in real-world graph data can substantially impair the effectiveness of GNNs. To address this challenge, we introduce a robust node representation learning method, Graph Contrastive Learning with Low-Rank Regularization (GCL-LRR), which follows a two-stage transductive learning framework for node classification. In the first stage, the GCL-LRR encoder is optimized by prototypical contrastive learning together with a low-rank regularization objective. In the second stage, the representations produced by GCL-LRR are used by a linear transductive classifier to predict the labels of the unlabeled nodes in the graph. GCL-LRR is inspired by the Low Frequency Property (LFP) of graph data and their labels, and it is theoretically motivated by our sharp generalization bound for transductive learning. To the best of our knowledge, our theoretical result is among the first to demonstrate the advantage of low-rank regularization in transductive learning, and it is supported by strong empirical results. To further improve GCL-LRR, we present GCL-LR-Attention, which incorporates a novel LR-Attention layer into GCL-LRR. The LR-Attention layer reduces the kernel complexity of GCL-LRR and yields a tighter generalization bound, leading to improved performance. Extensive evaluations on standard benchmark datasets demonstrate the effectiveness and robustness of both GCL-LRR and GCL-LR-Attention in learning meaningful node representations. The code is available at https://github.com/Statistical-Deep-Learning/GCL-LR-Attention.
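To make the two ingredients of the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation: (1) a prototypical contrastive loss over node embeddings combined with a nuclear-norm low-rank regularizer, and (2) a generic low-rank attention layer that bounds the rank of the pre-softmax attention matrix, standing in for the LR-Attention layer. All function names, shapes, and hyperparameters (`tau`, `lam`, the rank `r`) are illustrative assumptions; the repository linked above contains the actual method.

```python
# A hedged sketch of low-rank-regularized prototypical contrastive learning
# plus a low-rank attention layer. Names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def prototypical_contrastive_loss(z, proto_assign, prototypes, tau=0.5):
    """InfoNCE-style loss pulling each embedding toward its assigned prototype
    and away from the others (one common formulation of prototypical
    contrastive learning; the paper's exact objective may differ)."""
    z = F.normalize(z, dim=1)                    # (n, d) node embeddings
    prototypes = F.normalize(prototypes, dim=1)  # (k, d) cluster prototypes
    logits = z @ prototypes.t() / tau            # (n, k) cosine similarities
    return F.cross_entropy(logits, proto_assign)


def low_rank_regularizer(z):
    """Nuclear norm of the embedding matrix, the standard convex surrogate
    for matrix rank; penalizing it biases the encoder toward low-rank
    representations."""
    return torch.linalg.matrix_norm(z, ord="nuc")


class LowRankAttention(torch.nn.Module):
    """Attention whose query/key projections map into a rank-r subspace, so
    the pre-softmax score matrix has rank at most r. This is a generic
    low-rank attention design used only to illustrate the idea."""

    def __init__(self, d, r):
        super().__init__()
        self.r = r
        self.query = torch.nn.Linear(d, r, bias=False)
        self.key = torch.nn.Linear(d, r, bias=False)
        self.value = torch.nn.Linear(d, d, bias=False)

    def forward(self, x):                        # x: (n, d)
        scores = self.query(x) @ self.key(x).t() / self.r ** 0.5  # rank <= r
        return torch.softmax(scores, dim=-1) @ self.value(x)


# Toy usage with random tensors standing in for GNN encoder outputs.
n, d, k, r = 128, 64, 8, 4
encoder_out = torch.randn(n, d)
z = LowRankAttention(d, r)(encoder_out)          # refined node embeddings
prototypes = torch.randn(k, d)                   # e.g., k-means centroids of z
proto_assign = torch.randint(0, k, (n,))         # cluster index of each node

lam = 0.01                                       # regularization weight (assumed)
loss = prototypical_contrastive_loss(z, proto_assign, prototypes) \
       + lam * low_rank_regularizer(z)
loss.backward()
```

In this sketch the regularizer acts directly on the embedding matrix, which matches the LFP intuition that label-relevant signal concentrates in a few leading spectral components; where the paper applies the penalty (embeddings, kernel, or attention weights) is an implementation detail the abstract does not specify.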