Repeated Padding as Data Augmentation for Sequential Recommendation (2403.06372v1)

Published 11 Mar 2024 in cs.IR

Abstract: Sequential recommendation aims to provide users with personalized suggestions based on their historical interactions. When training sequential models, padding is a widely adopted technique for two main reasons: 1) the vast majority of models can only handle fixed-length sequences; 2) batch-based training requires that all sequences in a batch have the same length. The special value \emph{0} is usually used as the padding content; it carries no actual information and is ignored in model calculations. This common-sense padding strategy leads us to a question that has never been explored before: \emph{Can we fully utilize this idle input space by padding it with other content to further improve model performance and training efficiency?} In this paper, we propose a simple yet effective padding method called \textbf{Rep}eated \textbf{Pad}ding (\textbf{RepPad}). Specifically, we use the original interaction sequences as the padding content and fill the padding positions with them during model training. This operation can be performed a fixed number of times or repeated until the input sequence reaches the maximum length. Our RepPad can be viewed as a sequence-level data augmentation strategy. Unlike most existing works, our method contains no trainable parameters or hyperparameters and is a plug-and-play data augmentation operation. Extensive experiments on various categories of sequential models and five real-world datasets demonstrate the effectiveness and efficiency of our approach. The average recommendation performance improvement is up to 60.3\% on GRU4Rec and 24.3\% on SASRec. We also provide in-depth analysis and explanation of what makes RepPad effective from multiple perspectives. The source code will be released to ensure the reproducibility of our experiments.
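The padding operation the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' released implementation: the function name `rep_pad` and the optional `delimiter` argument (a 0 separating repeated copies, one variant the idea admits) are assumptions, and truncation/ordering details may differ from the paper.

```python
def rep_pad(seq, max_len, delimiter=None):
    """Fill the left padding positions of `seq` with repeated copies of
    `seq` itself (optionally separated by `delimiter`) instead of zeros,
    until the result reaches `max_len`. The original sequence stays at
    the right (most recent) end, as is conventional for left padding.
    """
    if len(seq) >= max_len:
        # sequence already fills the window: keep the most recent items
        return seq[-max_len:]
    unit = list(seq) if delimiter is None else list(seq) + [delimiter]
    padded = list(seq)
    while len(padded) < max_len:
        padded = unit + padded  # prepend another copy into the idle slots
    # trim any partial leading copy so the length is exactly max_len
    return padded[-max_len:]

# Example: a length-3 history padded to a length-7 input window
print(rep_pad([1, 2, 3], 7))
# Example: repeated copies separated by the padding value 0
print(rep_pad([1, 2], 5, delimiter=0))
```

Because the operation only rearranges existing interactions, it adds no trainable parameters, which matches the abstract's claim that RepPad is plug-and-play.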

Authors (7)
  1. Yizhou Dang (9 papers)
  2. Yuting Liu (62 papers)
  3. Enneng Yang (24 papers)
  4. Guibing Guo (35 papers)
  5. Linying Jiang (7 papers)
  6. Xingwei Wang (35 papers)
  7. Jianzhe Zhao (14 papers)