XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation (2209.02544v4)

Published 6 Sep 2022 in cs.IR

Abstract: Contrastive learning (CL) has recently been demonstrated to be critical in improving recommendation performance. The underlying principle of CL-based recommendation models is to ensure the consistency between representations derived from different graph augmentations of the user-item bipartite graph. This self-supervised approach allows for the extraction of general features from raw data, thereby mitigating the issue of data sparsity. Despite the effectiveness of this paradigm, the factors contributing to its performance gains have yet to be fully understood. This paper provides novel insights into the impact of CL on recommendation. Our findings indicate that CL enables the model to learn more evenly distributed user and item representations, which alleviates the prevalent popularity bias and promotes long-tail items. Our analysis also suggests that the graph augmentations, previously considered essential, are relatively unreliable and of limited significance in CL-based recommendation. Based on these findings, we put forward an eXtremely Simple Graph Contrastive Learning method (XSimGCL) for recommendation, which discards the ineffective graph augmentations and instead employs a simple yet effective noise-based embedding augmentation to generate views for CL. A comprehensive experimental study on four large and highly sparse benchmark datasets demonstrates that, though the proposed method is extremely simple, it can smoothly adjust the uniformity of learned representations and outperforms its graph augmentation-based counterparts by a large margin in both recommendation accuracy and training efficiency. The code and used datasets are released at https://github.com/Coder-Yu/SELFRec.

An Overview of XSimGCL: Simplifying Graph Contrastive Learning for Recommendation

The paper "XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation" examines how contrastive learning (CL) improves graph-based recommender systems. The authors present a streamlined approach, eXtremely Simple Graph Contrastive Learning (XSimGCL), that challenges the necessity of the complex graph augmentations prevalent in existing contrastive recommendation frameworks.

Key Insights

  1. The Role of Contrastive Learning: CL has gained traction in various applications due to its potential to derive meaningful patterns from unlabeled data, particularly in addressing the common issue of data sparsity. The paper provides evidence that the contrastive loss function, InfoNCE, is critical in balancing learned user and item representations, thereby alleviating popularity bias and enhancing the visibility of long-tail items.
  2. Questioning Graph Augmentations: Through comparative experiments, the authors show that while traditional graph augmentations such as edge and node dropout contribute to performance, their contribution is overshadowed by the representation-level uniformity that the InfoNCE loss induces. This leads them to conclude that structural augmentations are less essential than previously assumed.
  3. Proposed Method - XSimGCL: Building on these insights, the paper introduces XSimGCL, a method that abandons structural graph augmentations in favor of a simple noise-based embedding augmentation. This approach directly adjusts the uniformity of the learned representations, yielding higher recommendation accuracy and faster training (a sketch of the key ingredients follows this list).
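
The following is a minimal PyTorch sketch of the two ingredients above, written from the paper's description rather than from the released code: the noise-based embedding perturbation that replaces structural augmentations (insight 3) and the InfoNCE loss that contrasts the two perturbed views (insight 1). Function names and hyperparameter values (`eps`, `tau`) are illustrative; the authors' reference implementation lives in the SELFRec repository linked in the abstract.

```python
# Sketch only: written from the paper's description, not the official code.
import torch
import torch.nn.functional as F

def perturb(emb: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Add sign-consistent random noise of magnitude eps to each embedding."""
    noise = torch.rand_like(emb)              # uniform noise in [0, 1)
    noise = F.normalize(noise, dim=-1) * eps  # scale to an eps-radius sphere
    return emb + torch.sign(emb) * noise      # keep the noise in emb's orthant

def info_nce(view1: torch.Tensor, view2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """InfoNCE: pull each node's two views together, push all other pairs apart."""
    z1, z2 = F.normalize(view1, dim=-1), F.normalize(view2, dim=-1)
    logits = z1 @ z2.T / tau                  # cosine similarities over temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)    # positive pairs sit on the diagonal
```

Because the perturbation only touches embeddings, no augmented graph has to be constructed or propagated for each view, which is where much of the training-time saving comes from.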

Experimental Results

Comprehensive evaluations on four large, sparse datasets demonstrate the advantages of XSimGCL over existing graph augmentation-based methods. XSimGCL achieves substantial improvements in both recommendation accuracy and training speed. Notably, the method outperforms its predecessor, SimGCL, due to its simplified architecture and effective use of cross-layer contrast, which exploits high-frequency graph information.
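
A hedged sketch of that cross-layer scheme, assuming a LightGCN-style encoder and the same noise injection as above: XSimGCL runs a single perturbed forward pass and contrasts the final aggregated embedding with one intermediate layer's output, so no separate augmented passes are needed. The adjacency handling and which layers enter the final average are assumptions here; consult the released code for the exact details.

```python
# Sketch of XSimGCL's single-pass, cross-layer forward (assumptions noted above).
# `adj` is assumed to be the normalized user-item adjacency as a sparse tensor.
import torch
import torch.nn.functional as F

def xsimgcl_forward(emb0: torch.Tensor, adj: torch.Tensor,
                    n_layers: int = 3, cl_layer: int = 1, eps: float = 0.1):
    layer_embs, emb = [], emb0
    for _ in range(n_layers):
        emb = torch.sparse.mm(adj, emb)                        # LightGCN propagation
        noise = F.normalize(torch.rand_like(emb), dim=-1) * eps
        emb = emb + torch.sign(emb) * noise                    # per-layer perturbation
        layer_embs.append(emb)
    final = torch.stack(layer_embs).mean(dim=0)                # layer-averaged view
    return final, layer_embs[cl_layer]                         # final + cross-layer view
```

In training, `final` would feed the recommendation loss (e.g., BPR) while the pair `(final, layer_embs[cl_layer])` feeds InfoNCE, so one propagation serves both objectives; this is the source of the efficiency gain over SimGCL, which needs extra forward passes to build its contrastive views.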

The results further indicate that the proposed noise-based augmentation can smoothly control representation uniformity: tuning the noise magnitude directly regulates how evenly the learned embeddings spread. The experiments validate the theoretical claims regarding this approach, supported by an analysis through the lens of the graph spectrum.
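
The uniformity being controlled here can be quantified with the metric of Wang and Isola (ICML 2020), which this line of work uses in its analysis; lower values mean more evenly spread representations. A minimal sketch, assuming `emb` holds a batch of user or item embeddings:

```python
import torch
import torch.nn.functional as F

def uniformity(emb: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """log E[exp(-t * ||z_i - z_j||^2)] over all pairs on the unit hypersphere."""
    z = F.normalize(emb, dim=-1)               # project embeddings onto the sphere
    sq_dists = torch.pdist(z, p=2).pow(2)      # condensed pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()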

Implications and Future Directions

The findings from the paper present significant implications for the development of recommendation systems. By demonstrating that complex graph augmentations become redundant once a carefully designed contrastive objective is in place, the paper paves the way for more efficient and effective recommendation models tailored to large-scale, sparse data environments.

Future work may delve into exploring the application of noise-based contrastive learning across diverse domains beyond recommendation, as well as investigating adaptive noise mechanisms that could further enhance the flexibility and robustness of such models.

In conclusion, XSimGCL represents a promising advancement in contrastive learning for recommendation systems, challenging conventional techniques and pointing towards a more efficient path forward.

Authors (6)
  1. Junliang Yu (34 papers)
  2. Xin Xia (171 papers)
  3. Tong Chen (200 papers)
  4. Lizhen Cui (66 papers)
  5. Nguyen Quoc Viet Hung (18 papers)
  6. Hongzhi Yin (210 papers)
Citations (121)