LLMRec: Benchmarking Large Language Models on Recommendation Task (2308.12241v1)

Published 23 Aug 2023 in cs.IR and cs.AI

Abstract: The rapid development of LLMs such as ChatGPT has recently advanced NLP tasks significantly by enhancing the capabilities of conversational models. However, the application of LLMs in the recommendation domain has not been thoroughly investigated. To bridge this gap, we propose LLMRec, an LLM-based recommender system designed for benchmarking LLMs on various recommendation tasks. Specifically, we benchmark several popular off-the-shelf LLMs, such as ChatGPT, LLaMA, and ChatGLM, on five recommendation tasks: rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization. Furthermore, we investigate the effectiveness of supervised fine-tuning in improving LLMs' instruction-following ability. The benchmark results indicate that LLMs display only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation, but perform comparably to state-of-the-art methods on explainability-based tasks. We also conduct qualitative evaluations to further assess the quality of the content generated by the different models; the results show that LLMs can genuinely understand the provided information and produce clearer, more reasonable outputs. We hope this benchmark will inspire researchers to delve deeper into the potential of LLMs for enhancing recommendation performance. Our code, processed data, and benchmark results are available at https://github.com/williamliujl/LLMRec.
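
To make the prompt-based benchmarking setup concrete, here is a minimal sketch of how one of the five tasks (rating prediction) can be posed to an off-the-shelf LLM, assuming an OpenAI-style chat API. The prompt wording, the gpt-3.5-turbo model name, and the predict_rating helper are illustrative assumptions, not the paper's exact templates; the actual prompts and evaluation code are in the linked repository.

```python
# Minimal sketch: prompting an off-the-shelf LLM for rating prediction,
# one of the five LLMRec tasks. The template and model below are
# illustrative assumptions, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def predict_rating(user_history: list[tuple[str, int]], target_item: str) -> str:
    """Ask the LLM to predict a 1-5 star rating from a user's past ratings."""
    history = "\n".join(f"- {title}: {stars} stars" for title, stars in user_history)
    prompt = (
        "Here are a user's past movie ratings:\n"
        f"{history}\n"
        f'Predict the user\'s rating for "{target_item}" on a 1-5 scale. '
        "Answer with a single number."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the ChatGPT model benchmarked
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for reproducible benchmarking
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    history = [("The Matrix", 5), ("Inception", 4), ("Titanic", 2)]
    print(predict_rating(history, "Interstellar"))
```

The same pattern generalizes to the other tasks by swapping the prompt template, e.g. listing an interaction sequence and asking for the next item (sequential recommendation) or asking for a rationale (explanation generation), which is why instruction-following quality, and hence supervised fine-tuning, matters for the accuracy-based tasks.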

Authors (11)
  1. Junling Liu (9 papers)
  2. Chao Liu (358 papers)
  3. Peilin Zhou (34 papers)
  4. Qichen Ye (12 papers)
  5. Dading Chong (19 papers)
  6. Kang Zhou (74 papers)
  7. Yueqi Xie (22 papers)
  8. Yuwei Cao (13 papers)
  9. Shoujin Wang (40 papers)
  10. Chenyu You (66 papers)
  11. Philip S. Yu (592 papers)
Citations (22)
GitHub: https://github.com/williamliujl/LLMRec