
Disentangling Past-Future Modeling in Sequential Recommendation via Dual Networks (2210.14577v2)

Published 26 Oct 2022 in cs.IR

Abstract: Sequential recommendation (SR) plays an important role in personalized recommender systems because it captures dynamic and diverse preferences from users' continuously growing behavior sequences. Unlike the standard autoregressive training strategy, some methods exploit future data (also available during training) to facilitate model training, since it provides richer signals about users' current interests and can improve recommendation quality. However, these methods suffer from a severe training-inference gap: both past and future contexts are modeled by the same encoder during training, while only historical behaviors are available during inference. This discrepancy leads to potential performance degradation. To alleviate the training-inference gap, we propose a new framework, DualRec, which achieves past-future disentanglement and past-future mutual enhancement via a novel dual network. Specifically, a dual network structure models the past and future contexts separately, and a bi-directional knowledge-transfer mechanism enhances the knowledge learnt by the two networks. Extensive experiments on four real-world datasets demonstrate the superiority of our approach over baseline methods. We further demonstrate the compatibility of DualRec by instantiating it with RNN, Transformer, and filter-MLP backbones. Additional empirical analysis verifies the high utility of modeling future contexts under the DualRec framework. The code of DualRec is publicly available at https://github.com/zhy99426/DualRec.
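The core idea in the abstract can be sketched in a few lines. This is a hypothetical toy illustration, not the authors' implementation (see their repository for that): two separate encoders handle the past and future halves of a behavior sequence (the "disentanglement"), a simple transfer term pulls their representations together during training (the "mutual enhancement"), and inference uses only the past encoder, which is what closes the training-inference gap. The linear-map encoders, dimension, and loss here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy item-embedding dimension (illustrative assumption)

W_past = rng.normal(size=(D, D))    # past-context encoder (toy linear map)
W_future = rng.normal(size=(D, D))  # future-context encoder (toy linear map)

def encode(W, items):
    """Mean-pool the item embeddings, then apply the encoder's linear map."""
    return items.mean(axis=0) @ W

def training_step(sequence, t):
    """Split the behavior sequence at position t: the past encoder sees only
    sequence[:t], the future encoder only sequence[t:] (disentanglement).
    A simple MSE between the two representations stands in for the
    bi-directional knowledge transfer (mutual enhancement)."""
    past_repr = encode(W_past, sequence[:t])
    future_repr = encode(W_future, sequence[t:])
    transfer_loss = float(np.mean((past_repr - future_repr) ** 2))
    return past_repr, future_repr, transfer_loss

def predict(history):
    """Inference uses only the past encoder on historical behaviors,
    matching the training-time role of that encoder exactly."""
    return encode(W_past, history)

seq = rng.normal(size=(10, D))         # 10 user behaviors as embeddings
_, _, loss = training_step(seq, t=6)   # train with both encoders
scores = predict(seq[:6])              # deploy with the past encoder alone
```

In the paper's actual framework the two encoders are full sequence models (RNN, Transformer, or filter-MLP backbones) trained with recommendation objectives alongside the transfer mechanism; the split/transfer/inference structure above is the part the sketch is meant to convey.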

Authors (9)
  1. Hengyu Zhang (15 papers)
  2. Enming Yuan (5 papers)
  3. Wei Guo (222 papers)
  4. Zhicheng He (47 papers)
  5. Jiarui Qin (24 papers)
  6. Huifeng Guo (60 papers)
  7. Bo Chen (309 papers)
  8. Xiu Li (166 papers)
  9. Ruiming Tang (172 papers)
Citations (10)
