
Improve Temporal Awareness of LLMs for Sequential Recommendation (2405.02778v1)

Published 5 May 2024 in cs.IR

Abstract: LLMs have demonstrated impressive zero-shot abilities in solving a wide range of general-purpose tasks. However, LLMs have been empirically found to fall short in recognizing and utilizing temporal information, leading to poor performance on tasks that require an understanding of sequential data, such as sequential recommendation. In this paper, we aim to improve the temporal awareness of LLMs by designing a principled prompting framework inspired by human cognitive processes. Specifically, we propose three prompting strategies that exploit temporal information within historical interactions for LLM-based sequential recommendation. In addition, we emulate divergent thinking by aggregating the LLM ranking results derived from these strategies. Evaluations on the MovieLens-1M and Amazon Review datasets indicate that our proposed method significantly enhances the zero-shot capabilities of LLMs in sequential recommendation tasks.

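The abstract describes the high-level recipe (temporally-aware prompting over a user's interaction history, followed by aggregation of multiple LLM rankings) without implementation detail. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' exact method: the three prompt templates, the `call_llm` stub, and the Borda-count aggregation are placeholders chosen for clarity.

```python
"""Minimal sketch: temporally-aware prompts + rank aggregation (illustrative only)."""
from collections import defaultdict
from typing import Callable, List


def build_prompts(history: List[str], candidates: List[str]) -> List[str]:
    """Three hypothetical prompting strategies that surface temporal cues."""
    hist_text = " -> ".join(history)                       # full chronological order
    recent_text = " -> ".join(history[-3:])                # emphasize recency
    numbered = "; ".join(f"step {i + 1}: {h}" for i, h in enumerate(history))
    cand_text = ", ".join(candidates)
    return [
        f"The user interacted, in order, with: {hist_text}. "
        f"Rank these candidates by what they would choose next: {cand_text}.",
        f"Focusing on the user's most recent interactions ({recent_text}), "
        f"rank the candidates: {cand_text}.",
        f"Given the sequence with explicit positions ({numbered}), "
        f"rank the candidates: {cand_text}.",
    ]


def aggregate_rankings(rankings: List[List[str]]) -> List[str]:
    """Borda-count style merge of several LLM-produced rankings."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - pos                        # higher rank -> more points
    return sorted(scores, key=scores.get, reverse=True)


def recommend(history: List[str], candidates: List[str],
              call_llm: Callable[[str], List[str]]) -> List[str]:
    """Query the LLM once per prompting strategy, then merge the rankings."""
    rankings = [call_llm(prompt) for prompt in build_prompts(history, candidates)]
    return aggregate_rankings(rankings)


if __name__ == "__main__":
    # Stub LLM that returns a fixed ranking; replace with a real chat-completion
    # call that parses the model's ranked list of candidate items.
    demo = recommend(
        history=["Toy Story", "Heat", "GoldenEye"],
        candidates=["Casino", "Jumanji", "Twelve Monkeys"],
        call_llm=lambda prompt: ["Casino", "Twelve Monkeys", "Jumanji"],
    )
    print(demo)
```

In practice, `call_llm` would issue one request per prompt and parse the returned ranked list; aggregating the three rankings is what the abstract refers to as emulating divergent thinking.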
Authors (6)
  1. Zhendong Chu (15 papers)
  2. Zichao Wang (34 papers)
  3. Ruiyi Zhang (98 papers)
  4. Yangfeng Ji (59 papers)
  5. Hongning Wang (107 papers)
  6. Tong Sun (49 papers)
Citations (5)