
Self-supervised Learning for Large-scale Item Recommendations (2007.12865v4)

Published 25 Jul 2020 in cs.LG, cs.IR, and stat.ML

Abstract: Large-scale recommender models find the most relevant items from huge catalogs, and they play a critical role in modern search and recommendation systems. To model the input space with large-vocabulary categorical features, a typical recommender model learns a joint embedding space through neural networks for both queries and items from user feedback data. However, with millions to billions of items in the corpus, users tend to provide feedback for only a very small set of them, yielding a power-law distribution; this makes the feedback data for long-tail items extremely sparse. Inspired by the recent success of self-supervised representation learning in both computer vision and natural language understanding, we propose a multi-task self-supervised learning (SSL) framework for large-scale item recommendations. The framework is designed to tackle the label-sparsity problem by learning better latent relationships among item features. Specifically, SSL improves item representation learning and also serves as additional regularization to improve generalization. Furthermore, we propose a novel data augmentation method that utilizes feature correlations within the proposed framework. We evaluate our framework using two real-world datasets with 500M and 1B training examples, respectively. Our results demonstrate the effectiveness of SSL regularization and show its superior performance over state-of-the-art regularization techniques. We have also launched the proposed techniques in a web-scale commercial app-to-app recommendation system, with significant improvements to top-tier business metrics demonstrated in A/B experiments on live traffic. Our online results also verify our hypothesis that the framework improves model performance even more on slices that lack supervision.
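
To make the recipe in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' code) of the multi-task objective it describes: a contrastive SSL loss computed over two augmented views of the same item's features, added to the main retrieval loss with a trade-off weight. The names (`ItemTower`, `feature_mask`, `alpha`) and the uniform random masking are illustrative assumptions; the paper's augmentation exploits feature correlations, which the plain dropout-style mask here only approximates.

```python
# Hypothetical sketch of the multi-task SSL objective described in the
# abstract. Assumed names: ItemTower, feature_mask, alpha.
import torch
import torch.nn.functional as F

class ItemTower(torch.nn.Module):
    """Maps concatenated item feature embeddings into a shared latent space."""
    def __init__(self, input_dim: int, embed_dim: int = 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(input_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so dot products below are cosine similarities.
        return F.normalize(self.net(x), dim=-1)

def feature_mask(x: torch.Tensor, mask_rate: float = 0.5) -> torch.Tensor:
    """Randomly zero out input features to form an augmented view.

    A simplification: the paper's augmentation masks *correlated* groups
    of features rather than independent coordinates as done here.
    """
    keep = (torch.rand_like(x) > mask_rate).float()
    return x * keep

def ssl_contrastive_loss(tower: ItemTower, item_feats: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: two augmented views of the same item are a
    positive pair; other in-batch items serve as negatives."""
    z1 = tower(feature_mask(item_feats))
    z2 = tower(feature_mask(item_feats))
    logits = z1 @ z2.t() / temperature        # [B, B] similarity matrix
    labels = torch.arange(z1.size(0))         # diagonal = positive pairs
    return F.cross_entropy(logits, labels)

def training_step(tower: ItemTower, main_loss: torch.Tensor,
                  item_feats: torch.Tensor, alpha: float = 0.3) -> torch.Tensor:
    """Multi-task objective: L = L_main + alpha * L_ssl, where the SSL term
    acts as a regularizer that is especially helpful for long-tail items."""
    return main_loss + alpha * ssl_contrastive_loss(tower, item_feats)
```

In this reading, `alpha` controls how strongly the SSL regularizer pulls representations of the same item's augmented views together, which is what the abstract credits for the gains on sparsely supervised (long-tail) slices.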

Authors (11)
  1. Tiansheng Yao (5 papers)
  2. Xinyang Yi (24 papers)
  3. Derek Zhiyuan Cheng (12 papers)
  4. Felix Yu (62 papers)
  5. Ting Chen (148 papers)
  6. Aditya Menon (6 papers)
  7. Lichan Hong (35 papers)
  8. Ed H. Chi (74 papers)
  9. Steve Tjoa (1 paper)
  10. Jieqi Kang (1 paper)
  11. Evan Ettinger (3 papers)
Citations (47)