
Learning Robust Recommenders through Cross-Model Agreement (2105.09605v3)

Published 20 May 2021 in cs.IR and cs.AI

Abstract: Learning from implicit feedback is one of the most common cases in the application of recommender systems. Generally speaking, interacted examples are considered positive while negative examples are sampled from uninteracted ones. However, noisy examples are prevalent in real-world implicit feedback. A noisy positive example may be interacted with even though it actually reflects negative user preference, and a noisy negative example, uninteracted only because the user is unaware of it, may denote potential positive user preference. Conventional training methods overlook these noisy examples, leading to sub-optimal recommendations. In this work, we propose a novel framework to learn robust recommenders from implicit feedback. Through an empirical study, we find that different models make relatively similar predictions on clean examples, which denote the real user preference, while their predictions on noisy examples vary much more. Motivated by this observation, we propose denoising with cross-model agreement (DeCA), which aims to minimize the KL-divergence between the real user preference distributions parameterized by two recommendation models while maximizing the likelihood of data observation. We apply the proposed DeCA to four state-of-the-art recommendation models and conduct experiments on four datasets. Experimental results demonstrate that DeCA significantly improves recommendation performance compared with normal training and other denoising methods. Code will be open-sourced.
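The abstract describes the DeCA objective at a high level: maximize the likelihood of the observed implicit feedback while minimizing a KL-divergence term that encourages two recommendation models to agree on the underlying user preference. The snippet below is a minimal illustrative sketch of that idea, not the paper's exact formulation; the function name deca_style_loss, the Bernoulli parameterization of per-example preference, and the symmetric KL combination are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def deca_style_loss(p_model_a, p_model_b, observed, agreement_weight=1.0):
    """Illustrative loss combining (i) the likelihood of the observed
    implicit feedback under each model and (ii) a symmetric KL term
    encouraging the two models' preference distributions to agree.

    p_model_a, p_model_b: predicted probabilities of positive preference
        from two different recommendation models, shape (batch,).
    observed: binary implicit-feedback labels as floats (1.0 = interacted),
        shape (batch,).
    """
    eps = 1e-8
    p_a = p_model_a.clamp(eps, 1 - eps)
    p_b = p_model_b.clamp(eps, 1 - eps)

    # Observation likelihood: binary cross-entropy for both models.
    nll = F.binary_cross_entropy(p_a, observed) + F.binary_cross_entropy(p_b, observed)

    # Symmetric KL divergence between the two Bernoulli preference distributions.
    kl_ab = p_a * (p_a / p_b).log() + (1 - p_a) * ((1 - p_a) / (1 - p_b)).log()
    kl_ba = p_b * (p_b / p_a).log() + (1 - p_b) * ((1 - p_b) / (1 - p_a)).log()
    agreement = (kl_ab + kl_ba).mean()

    return nll + agreement_weight * agreement
```

Per the abstract, the paper applies this cross-model agreement idea on top of four different recommendation backbones; the sketch only shows how the two loss terms might be combined for a single batch of predictions.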

Authors (6)
  1. Yu Wang (940 papers)
  2. Xin Xin (49 papers)
  3. Zaiqiao Meng (42 papers)
  4. Xiangnan He (200 papers)
  5. Joemon Jose (7 papers)
  6. Fuli Feng (143 papers)
Citations (39)
