
A Comprehensive Survey on Self-Supervised Learning for Recommendation (2404.03354v2)

Published 4 Apr 2024 in cs.IR and cs.AI

Abstract: Recommender systems play a crucial role in tackling the challenge of information overload by delivering personalized recommendations based on individual user preferences. Deep learning techniques, such as RNNs, GNNs, and Transformer architectures, have significantly propelled the advancement of recommender systems by enhancing their comprehension of user behaviors and preferences. However, supervised learning methods encounter challenges in real-life scenarios due to data sparsity, resulting in limitations in their ability to learn representations effectively. To address this, self-supervised learning (SSL) techniques have emerged as a solution, leveraging inherent data structures to generate supervision signals without relying solely on labeled data. By leveraging unlabeled data and extracting meaningful representations, recommender systems utilizing SSL can make accurate predictions and recommendations even when confronted with data sparsity. In this paper, we provide a comprehensive review of self-supervised learning frameworks designed for recommender systems, encompassing a thorough analysis of over 170 papers. We conduct an exploration of nine distinct scenarios, enabling a comprehensive understanding of SSL-enhanced recommenders in different contexts. For each domain, we elaborate on different self-supervised learning paradigms, namely contrastive learning, generative learning, and adversarial learning, so as to present technical details of how SSL enhances recommender systems in various contexts. We consistently maintain the related open-source materials at https://github.com/HKUDS/Awesome-SSLRec-Papers.


Summary

  • The paper provides a comprehensive review of SSL techniques that enhance recommendation accuracy by effectively utilizing unlabeled data.
  • It details methodologies including contrastive, generative, and adversarial learning with clear strategies for handling data sparsity and dynamic user preferences.
  • The study highlights future directions such as foundation recommender models and integrating large language models to address evolving challenges in recommendations.

A Comprehensive Survey on Self-Supervised Learning for Recommendation Systems

Introduction to Self-Supervised Learning in Recommendation Systems

Self-Supervised Learning (SSL) has emerged as a promising way to offset the scarcity of labeled data, and its appeal extends to a variety of domains, including recommendation systems. This survey examines the intersection of SSL and recommendation systems, highlighting how SSL techniques leverage unlabeled data to improve recommendation accuracy. Challenges particular to recommendation, such as data sparsity and dynamic user preferences, underscore the need for the supervision signals that SSL provides.

SSL Paradigms in Recommendation Systems

Contrastive Learning

A predominant SSL approach in recommendation systems is contrastive learning, which learns representations by contrasting similar (positive) and dissimilar (negative) instances. The surveyed work documents a range of strategies for view creation, pair sampling, and objective optimization. View-creation techniques span data-based and feature-based augmentation as well as model-based approaches that use neural networks to generate diverse views. InfoNCE-based or Jensen-Shannon (JS) divergence-based objectives are commonly used to optimize these models, yielding embeddings that effectively capture user-item interaction patterns.
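To make the objective concrete, below is a minimal PyTorch sketch of an InfoNCE-style contrastive loss over two augmented views of the same batch of user (or item) embeddings. The encoder that produces the views, the embedding size, and the temperature are illustrative assumptions, not details drawn from any specific surveyed method.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(view1: torch.Tensor, view2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """InfoNCE loss between two augmented views of the same batch of embeddings.

    view1, view2: [batch_size, dim] embeddings of the same users under two
    stochastic augmentations (e.g., two edge-dropped subgraphs or two
    perturbed behavior sequences).
    """
    z1 = F.normalize(view1, dim=-1)
    z2 = F.normalize(view2, dim=-1)
    # Similarity of every view-1 embedding to every view-2 embedding.
    logits = z1 @ z2.t() / temperature            # [B, B]
    # The diagonal entries are the positive pairs; the rest act as in-batch negatives.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Example: embeddings of 256 users under two augmentations (random here).
u1, u2 = torch.randn(256, 64), torch.randn(256, 64)
loss = info_nce_loss(u1, u2)
```

In a graph-based recommender the two views might come from independently edge-dropped subgraphs, while in a sequential recommender they might come from two augmented versions of the same interaction sequence; the loss itself stays the same.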

Generative Learning

Another pivotal SSL paradigm is generative learning, which focuses on generating or reconstructing parts of the input data. In recommendation systems, the generation targets range from user-item interaction matrices to knowledge graph triplets. Generative models such as variational autoencoders (VAEs) and denoising diffusion models have proven effective at capturing the underlying distribution of user-item interactions and knowledge graph entities, respectively, leading to improved recommendation performance.
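As a rough illustration of the generative route, the following sketch trains a small VAE to reconstruct a user's binary interaction vector over the item catalog, in the spirit of VAE-based collaborative filtering. The layer sizes, the multinomial reconstruction term, and the KL weight are assumptions made for the example, not the recipe of any particular surveyed model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractionVAE(nn.Module):
    """Toy VAE that reconstructs a user's binary interaction vector over n_items."""
    def __init__(self, n_items: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(n_items, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.Tanh(), nn.Linear(256, n_items))

    def forward(self, x):
        h = torch.tanh(self.encoder(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients w.r.t. mu and logvar.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(logits, x, mu, logvar, beta: float = 0.2):
    # Multinomial reconstruction term over items plus a KL regularizer on the latent code.
    recon = -(F.log_softmax(logits, dim=-1) * x).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return recon + beta * kl

# Example: 32 users over a 1000-item catalog with sparse random interactions.
model = InteractionVAE(n_items=1000)
x = torch.bernoulli(torch.full((32, 1000), 0.02))
logits, mu, logvar = model(x)
loss = vae_loss(logits, x, mu, logvar)
```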

Adversarial Learning

Adversarial learning sets up a competition between a generative model and a discriminative model to improve recommendation quality. Central to this paradigm is the generation of realistic synthetic user-item interactions or features, which a discriminator then tries to distinguish from observed data. This min-max optimization has proven especially beneficial in cross-domain and multi-modal recommendation, where knowledge must be transferred or fused across domains or modalities.
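The min-max setup can be sketched as a single adversarial training step for collaborative filtering: a generator produces synthetic interaction vectors and a discriminator learns to separate them from observed ones. The architectures, sizes, and optimizer settings below are illustrative assumptions, not the procedure of any specific surveyed paper.

```python
import torch
import torch.nn as nn

# One adversarial training step over user-item interaction vectors (illustrative sizes).
n_items, latent = 1000, 64
G = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, n_items), nn.Sigmoid())
D = nn.Sequential(nn.Linear(n_items, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.bernoulli(torch.full((32, n_items), 0.02))   # observed interactions
noise = torch.randn(32, latent)

# Discriminator step: push observed interactions toward label 1, generated ones toward 0.
fake = G(noise).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: update G so the discriminator labels its outputs as real.
g_loss = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```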

Implications and Future Directions

The exploration of SSL in recommendation systems has unveiled a spectrum of methodologies that effectively utilize unlabeled data to surmount challenges inherent to the domain. The survey not only categorizes the existing SSL paradigms but also identifies promising directions such as the development of foundation recommender models and the integration of LLMs. Moreover, addressing the dynamic nature of recommendation environments and constructing theoretical foundations for emerging SSL paradigms remain pivotal areas for future research.

In essence, SSL stands as a cornerstone technology that significantly propels the frontier of recommendation systems toward addressing key challenges of data scarcity and improving prediction accuracy. As we move forward, the continuous evolution and innovative application of SSL paradigms hold the potential to redefine the landscape of recommendation systems.
