RecDCL: Dual Contrastive Learning for Recommendation (2401.15635v2)

Published 28 Jan 2024 in cs.IR and cs.CL

Abstract: Self-supervised learning (SSL) has recently achieved great success in mining the user-item interactions for collaborative filtering. As a major paradigm, contrastive learning (CL) based SSL helps address data sparsity in Web platforms by contrasting the embeddings between raw and augmented data. However, existing CL-based methods mostly focus on contrasting in a batch-wise way, failing to exploit potential regularity in the feature dimension. This leads to redundant solutions during the representation learning of users and items. In this work, we investigate how to employ both batch-wise CL (BCL) and feature-wise CL (FCL) for recommendation. We theoretically analyze the relation between BCL and FCL, and find that combining BCL and FCL helps eliminate redundant solutions but never misses an optimal solution. We propose a dual contrastive learning recommendation framework -- RecDCL. In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs and to optimize the uniform distributions within users and items using a polynomial kernel that drives the representations to be orthogonal. The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations. Extensive experiments on four widely-used benchmarks and one industry dataset demonstrate that RecDCL can consistently outperform the state-of-the-art GNN-based and SSL-based models (with an improvement of up to 5.65% in terms of Recall@20). The source code is publicly available (https://github.com/THUDM/RecDCL).
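The abstract describes two complementary objectives: a feature-wise contrastive (FCL) term that removes redundancy on user-item positive pairs and pushes representations toward orthogonality via a polynomial kernel, and a batch-wise contrastive (BCL) term on output embeddings. As a rough illustration of how such a dual objective could be assembled, here is a minimal PyTorch sketch; the function names, the polynomial-kernel form, the InfoNCE-style BCL term, the augmented views `user_aug`/`item_aug`, and the weights `alpha`/`beta` are all assumptions made for exposition, not the paper's exact formulation (see the released code at https://github.com/THUDM/RecDCL for the actual implementation).

```python
# Illustrative sketch of a dual (feature-wise + batch-wise) contrastive
# objective, loosely following the abstract. Names, kernel form, and all
# hyperparameters are assumptions, not RecDCL's actual implementation.
import torch
import torch.nn.functional as F


def feature_wise_loss(user_emb, item_emb, gamma=1.0, c=1.0, degree=4):
    """FCL sketch: align user-item positive pairs per feature dimension,
    push the feature cross-correlation toward the identity (orthogonality),
    and add an assumed polynomial-kernel uniformity term within users/items."""
    u = F.normalize(user_emb, dim=0)          # normalize each feature column
    v = F.normalize(item_emb, dim=0)
    corr = u.t() @ v                          # d x d cross-correlation matrix
    diag = torch.diagonal(corr)
    invariance = (diag - 1.0).pow(2).sum()                 # pull positives together
    redundancy = corr.pow(2).sum() - diag.pow(2).sum()     # decorrelate features
    # Assumed polynomial-kernel uniformity within users and within items.
    poly_u = (gamma * user_emb @ user_emb.t() + c).pow(degree).mean()
    poly_i = (gamma * item_emb @ item_emb.t() + c).pow(degree).mean()
    return invariance + redundancy + poly_u + poly_i


def batch_wise_loss(z1, z2, temperature=0.2):
    """BCL sketch: InfoNCE-style contrast between two output views,
    treating matching rows in the batch as positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # B x B similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def dual_loss(user_emb, item_emb, user_aug, item_aug, alpha=1.0, beta=0.1):
    """Combine FCL and BCL; alpha and beta are illustrative weights."""
    fcl = feature_wise_loss(user_emb, item_emb)
    bcl = batch_wise_loss(user_emb, user_aug) + batch_wise_loss(item_emb, item_aug)
    return alpha * fcl + beta * bcl
```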

Authors (9)
  1. Dan Zhang (171 papers)
  2. Yangliao Geng (2 papers)
  3. Wenwen Gong (4 papers)
  4. Zhongang Qi (40 papers)
  5. Zhiyu Chen (60 papers)
  6. Xing Tang (43 papers)
  7. Ying Shan (252 papers)
  8. Yuxiao Dong (119 papers)
  9. Jie Tang (302 papers)
Citations (8)
