The Whole is Better than the Sum: Using Aggregated Demonstrations in In-Context Learning for Sequential Recommendation (2403.10135v1)

Published 15 Mar 2024 in cs.IR, cs.AI, and cs.CL

Abstract: LLMs have shown excellent performance on various NLP tasks. To use LLMs as strong sequential recommenders, we explore the in-context learning (ICL) approach to sequential recommendation. We investigate the effects of instruction format, task consistency, demonstration selection, and number of demonstrations. Since simply increasing the number of demonstrations in ICL does not improve accuracy despite lengthening the prompt, we propose a novel method called LLMsRec-Syn that incorporates multiple demonstration users into one aggregated demonstration. Our experiments on three recommendation datasets show that LLMsRec-Syn outperforms state-of-the-art LLM-based sequential recommendation methods. In some cases, LLMsRec-Syn can perform on par with or even better than supervised learning methods. Our code is publicly available at https://github.com/demoleiwang/LLMsRec_Syn.
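The core idea is easiest to see as prompt construction. Below is a minimal sketch, assuming a movie-recommendation setting: instead of appending each demonstration user as a separate few-shot example, several demonstration users' interaction histories are fused into a single aggregated demonstration block before the test user's request. All function names, field names, and prompt wording here are illustrative assumptions, not the paper's exact template, and the real LLMsRec-Syn aggregation step is more involved than plain concatenation.

# Illustrative sketch of an "aggregated demonstration" prompt for
# in-context sequential recommendation. Names and wording are
# assumptions for illustration, not the paper's exact format.

def format_user(history, candidates, target=None):
    """Render one user's interaction history and candidate set as text."""
    lines = [
        f"Watched movies (in order): {', '.join(history)}",
        f"Candidate movies: {', '.join(candidates)}",
    ]
    if target is not None:
        lines.append(f"Recommendation: {target}")
    return "\n".join(lines)

def build_aggregated_prompt(demo_users, test_history, test_candidates):
    """Fuse several demonstration users into ONE aggregated demonstration,
    then append the test user's request."""
    demo_block = "\n\n".join(
        format_user(u["history"], u["candidates"], u["target"])
        for u in demo_users
    )
    test_block = format_user(test_history, test_candidates)
    return (
        "You are a sequential recommender. Below is one aggregated "
        "demonstration built from several similar users, followed by a "
        "new user. Pick the best candidate for the new user.\n\n"
        "### Aggregated demonstration\n" + demo_block + "\n\n"
        "### New user\n" + test_block + "\nRecommendation:"
    )

if __name__ == "__main__":
    demos = [
        {"history": ["Alien", "Blade Runner"],
         "candidates": ["The Thing", "Titanic"],
         "target": "The Thing"},
        {"history": ["Heat", "Se7en"],
         "candidates": ["Zodiac", "Frozen"],
         "target": "Zodiac"},
    ]
    print(build_aggregated_prompt(demos, ["The Matrix"], ["Inception", "Cars"]))

The practical point of this prompt shape is that it keeps a single demonstration slot regardless of how many users contribute to it, so adding more demonstration users enriches the example without multiplying prompt length the way separate few-shot examples would.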

Authors (2)
  1. Lei Wang (975 papers)
  2. Ee-Peng Lim (57 papers)
Citations (4)
