Generative Sequential Recommendation with GPTRec (2306.11114v1)

Published 19 Jun 2023 in cs.IR

Abstract: Sequential recommendation is an important recommendation task that aims to predict the next item in a sequence. Recently, adaptations of language models, particularly Transformer-based models such as SASRec and BERT4Rec, have achieved state-of-the-art results in sequential recommendation. In these models, item ids replace the tokens of the original language models. However, this approach has limitations. First, the vocabulary of item ids may be many times larger than in language models. Second, the classical Top-K recommendation approach used by these models may not be optimal for complex recommendation objectives, including auxiliary objectives such as diversity, coverage or coherence. Recent progress in generative LLMs inspires us to revisit generative approaches to address these challenges. This paper presents the GPTRec sequential recommendation model, which is based on the GPT-2 architecture. GPTRec can address large vocabulary issues by splitting item ids into sub-id tokens using a novel SVD Tokenisation algorithm based on quantised item embeddings from an SVD decomposition of the user-item interaction matrix. The paper also presents a novel Next-K recommendation strategy, which generates recommendations item-by-item, considering already recommended items. The Next-K strategy can be used for producing complex interdependent recommendation lists. We experiment with GPTRec on the MovieLens-1M dataset and show that, using sub-item tokenisation, GPTRec can match the quality of SASRec while reducing the embedding table by 40%. We also show that the recommendations generated by GPTRec on MovieLens-1M using the Next-K recommendation strategy match the quality of SASRec in terms of NDCG@10, meaning that the model can serve as a strong starting point for future research.
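
The two mechanisms highlighted in the abstract, SVD sub-id tokenisation and Next-K generation, can be illustrated with a rough sketch. This is a minimal, illustrative reading of the abstract only, not the authors' implementation: the function names (`svd_sub_id_tokens`, `next_k_recommend`), the `score_fn` stand-in for a trained GPTRec model, and parameters such as `num_dims` and `values_per_dim` are hypothetical choices.

```python
import numpy as np

def svd_sub_id_tokens(interactions: np.ndarray,
                      num_dims: int = 8,
                      values_per_dim: int = 256) -> np.ndarray:
    """Sketch of SVD tokenisation: split each item id into `num_dims` sub-id tokens
    by quantising item embeddings from an SVD of the (users x items) matrix.
    Assumes num_dims <= min(n_users, n_items)."""
    # Truncated SVD of the user-item matrix; rows of V^T give item embeddings.
    _, _, vt = np.linalg.svd(interactions, full_matrices=False)
    item_emb = vt[:num_dims].T                       # (n_items, num_dims)

    # Min-max normalise each dimension, then quantise into equal-width buckets.
    lo, hi = item_emb.min(axis=0), item_emb.max(axis=0)
    buckets = ((item_emb - lo) / (hi - lo + 1e-12) * (values_per_dim - 1)).astype(int)

    # Offset each dimension so sub-ids from different dimensions never collide.
    offsets = np.arange(num_dims) * values_per_dim
    return buckets + offsets                         # (n_items, num_dims) sub-id tokens


def next_k_recommend(score_fn, history: list[int], k: int = 10) -> list[int]:
    """Sketch of Next-K generation: build the list item by item, conditioning each
    step on the user's history *and* the items recommended so far.

    `score_fn(sequence) -> np.ndarray` is a hypothetical stand-in for a trained
    GPTRec checkpoint; a real model would consume sub-id token sequences rather
    than raw item ids."""
    recommended: list[int] = []
    for _ in range(k):
        scores = score_fn(history + recommended)     # condition on the partial list
        scores = scores.astype(float)
        scores[recommended] = -np.inf                # never repeat an item
        recommended.append(int(scores.argmax()))
    return recommended
```

The point of the tokenisation step is that the embedding table only needs on the order of `num_dims * values_per_dim` sub-id entries rather than one entry per item, which is the mechanism behind the reported embedding-table reduction. Next-K differs from classical Top-K in that each new item is scored conditioned on the partial recommendation list, which is what allows interdependent objectives such as diversity or coverage to shape the generated list.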

Authors (2)
  1. Aleksandr V. Petrov (6 papers)
  2. Craig Macdonald (49 papers)
Citations (16)
