Sequential Recommendation with Latent Relations based on Large Language Model (2403.18348v1)

Published 27 Mar 2024 in cs.IR

Abstract: Sequential recommender systems predict items that may interest users by modeling their preferences based on historical interactions. Traditional sequential recommendation methods rely on capturing implicit collaborative filtering signals among items. Recent relation-aware sequential recommendation models have achieved promising performance by explicitly incorporating item relations into the modeling of user historical sequences, where most relations are extracted from knowledge graphs. However, existing methods rely on manually predefined relations and suffer from the sparsity issue, which limits their generalization ability in diverse scenarios with varied item relations. In this paper, we propose a novel relation-aware sequential recommendation framework with Latent Relation Discovery (LRD). Different from previous relation-aware models that rely on predefined rules, we propose to leverage the LLM to provide new types of relations and connections between items. The motivation is that LLMs contain abundant world knowledge, which can be used to mine latent relations of items for recommendation. Specifically, inspired by the fact that humans can describe relations between items in natural language, LRD harnesses the LLM, which has demonstrated human-like knowledge, to obtain language knowledge representations of items. These representations are fed into a latent relation discovery module based on the discrete-state variational autoencoder (DVAE). Then the self-supervised relation discovery task and the recommendation task are jointly optimized. Experimental results on multiple public datasets demonstrate that our proposed latent relation discovery method can be incorporated into existing relation-aware sequential recommendation models and significantly improves their performance. Further analysis indicates the effectiveness and reliability of the discovered latent relations.
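The abstract describes a discrete-state VAE (DVAE) head that infers latent relations between item pairs from LLM-derived language representations and is trained jointly with the recommendation objective. As a rough illustration of that idea only (a minimal PyTorch sketch, not the paper's implementation; the module names, the Gumbel-softmax relaxation, and the uniform relation prior are assumptions made here for concreteness), such a component might look like:

```python
# Minimal sketch (not the authors' code): a DVAE-style latent relation
# discovery head over LLM-derived item embeddings, trained jointly with a
# sequential recommendation loss. All names below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentRelationDiscovery(nn.Module):
    def __init__(self, llm_dim: int, num_relations: int, hidden: int = 256):
        super().__init__()
        # q(r | head, tail): infer a discrete relation from a pair of
        # LLM-based item representations.
        self.relation_encoder = nn.Sequential(
            nn.Linear(2 * llm_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_relations),
        )
        # p(tail | head, r): reconstruct the tail item representation from
        # the head item and the sampled relation.
        self.relation_emb = nn.Embedding(num_relations, llm_dim)
        self.decoder = nn.Sequential(
            nn.Linear(2 * llm_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, llm_dim),
        )

    def forward(self, head_repr, tail_repr, tau: float = 1.0):
        # head_repr, tail_repr: [batch, llm_dim] language-knowledge vectors.
        logits = self.relation_encoder(
            torch.cat([head_repr, tail_repr], dim=-1))
        # Gumbel-softmax gives a differentiable sample of the discrete relation.
        relation = F.gumbel_softmax(logits, tau=tau, hard=False)
        rel_vec = relation @ self.relation_emb.weight        # [batch, llm_dim]
        recon = self.decoder(torch.cat([head_repr, rel_vec], dim=-1))
        # Self-supervised objective: reconstruct the tail item, plus a
        # KL term pulling q(r | head, tail) toward a uniform prior.
        recon_loss = F.mse_loss(recon, tail_repr)
        q = F.softmax(logits, dim=-1)
        log_k = torch.log(torch.tensor(float(q.size(-1))))
        kl = (q * (q.clamp_min(1e-9).log() + log_k)).sum(-1).mean()
        return relation, recon_loss + kl


# Joint training (sketch): total_loss = rec_loss + lambda_lrd * lrd_loss,
# where rec_loss comes from any relation-aware sequential recommender.
```

In this sketch the relation discovery loss would simply be added, with a weighting coefficient, to the loss of whichever relation-aware sequential recommender it is paired with, matching the joint optimization described in the abstract.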

Authors (7)
  1. Shenghao Yang (45 papers)
  2. Weizhi Ma (43 papers)
  3. Peijie Sun (48 papers)
  4. Qingyao Ai (113 papers)
  5. Yiqun Liu (131 papers)
  6. Mingchen Cai (9 papers)
  7. Min Zhang (630 papers)
Citations (5)