
Mutual Harmony: Sequential Recommendation with Dual Contrastive Network (2209.08446v4)

Published 18 Sep 2022 in cs.IR

Abstract: With the explosion of today's streaming data, sequential recommendation is a promising approach to time-aware personalized modeling: it aims to infer the next item a given user will interact with from that user's historical item sequence. Some recent works improve sequential recommendation by randomly masking historical items to generate self-supervised signals, but such approaches make the item sequence sparser and yield unreliable signals. Moreover, existing sequential recommendation models are purely user-centric, i.e., they predict the probability of candidate items from the user's chronologically ordered history, ignoring whether the items from a provider can be successfully recommended. Such user-centric recommendation leaves providers unable to expose their new items and fails to consider the accordant interactions between the user and item dimensions. In this paper, we propose a novel Dual Contrastive Network (DCN) to achieve mutual harmony between users and item providers, generating ground-truth self-supervised signals for sequential recommendation from an auxiliary user sequence in an item-centric dimension. Specifically, we propose dual representation contrastive learning, which refines representation learning by minimizing the Euclidean distance between the representation of a given user/item and the representations of that user's historical items / that item's historical users. Before the second contrastive learning module, we perform next-user prediction to capture the trends of items preferred by certain types of users and to give item providers personalized exploration opportunities. Finally, we propose dual interest contrastive learning to self-supervise the dynamic interest from next-item/next-user prediction against the static interest of matching probability. Experiments on four benchmark datasets verify the effectiveness of our proposed method.
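The abstract's dual representation contrastive learning pulls a user's representation toward the items in their history and, symmetrically, pulls an item's representation toward the users who interacted with it, by minimizing Euclidean distances. The paper's exact loss is not given here, so the following is only a minimal toy sketch of that symmetric idea, assuming mean-pooled history representations (the function name and pooling choice are illustrative, not from the paper):

```python
import numpy as np

def dual_representation_contrastive_loss(user_vec, hist_item_vecs,
                                         item_vec, hist_user_vecs):
    """Toy sketch of the dual (user-centric + item-centric) objective:
    - user side: Euclidean distance between the user representation and
      the mean representation of that user's historical items;
    - item side: Euclidean distance between the item representation and
      the mean representation of that item's historical users.
    Minimizing the sum pulls both pairs of representations together.
    """
    user_side = np.linalg.norm(user_vec - hist_item_vecs.mean(axis=0))
    item_side = np.linalg.norm(item_vec - hist_user_vecs.mean(axis=0))
    return user_side + item_side

# Illustrative usage with 2-D toy embeddings
hist_items = np.array([[1.0, 0.0], [0.0, 1.0]])   # items a user interacted with
hist_users = np.array([[2.0, 2.0], [0.0, 0.0]])   # users who chose an item
user_vec = np.array([0.5, 0.5])
item_vec = np.array([1.0, 1.0])
loss = dual_representation_contrastive_loss(user_vec, hist_items,
                                            item_vec, hist_users)
```

In a real model the representations would come from sequence encoders and the loss would include negative pairs; this sketch shows only the distance-minimizing "pull" term that the abstract describes.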

Authors (9)
  1. Guanyu Lin (9 papers)
  2. Chen Gao (136 papers)
  3. Yinfeng Li (10 papers)
  4. Yu Zheng (196 papers)
  5. Zhiheng Li (67 papers)
  6. Depeng Jin (72 papers)
  7. Dong Li (429 papers)
  8. Jianye Hao (185 papers)
  9. Yong Li (628 papers)
