
LLM-Rec: Personalized Recommendation via Prompting Large Language Models (2307.15780v3)

Published 24 Jul 2023 in cs.CL, cs.AI, and cs.IR

Abstract: Text-based recommendation holds a wide range of practical applications due to its versatility, as textual descriptions can represent nearly any type of item. However, directly employing the original item descriptions may not yield optimal recommendation performance, because they often lack the comprehensive information needed to align with user preferences. Recent advances in LLMs have showcased their remarkable ability to harness commonsense knowledge and reasoning. In this study, we introduce a novel approach, coined LLM-Rec, which incorporates four distinct prompting strategies of text enrichment for improving personalized text-based recommendations. Our empirical experiments reveal that using LLM-augmented text significantly enhances recommendation quality. Even basic MLP (Multi-Layer Perceptron) models achieve results comparable to, or better than, those of complex content-based methods. Notably, the success of LLM-Rec lies in its prompting strategies, which effectively tap into the LLM's comprehension of both general and specific item characteristics. This highlights the importance of employing diverse prompts and input augmentation techniques to boost the recommendation effectiveness of LLMs.
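The pipeline the abstract describes (prompt an LLM to enrich an item description, combine the enriched text with the original, then score items with a simple MLP) can be sketched as below. The prompt wordings, the `fake_llm` stub, the hashing embedding, and the fixed MLP weights are all illustrative assumptions, not the paper's actual prompts or trained model.

```python
# Hedged sketch of the LLM-Rec idea: enrich an item description via
# prompting, concatenate with the original text, embed, and score with
# a small MLP. Everything here is a toy stand-in for illustration.
import hashlib
import math

def basic_prompt(desc):
    # Assumed wording for a "basic" enrichment prompt.
    return f"Paraphrase the following item description: {desc}"

def rec_driven_prompt(desc):
    # Assumed wording for a recommendation-driven enrichment prompt.
    return ("Describe what kind of user would enjoy this item, "
            f"to help a recommender system: {desc}")

def fake_llm(prompt):
    # Stand-in for a real LLM call; returns a canned "enriched" text.
    return prompt.lower()

def embed(text, dim=8):
    # Toy hashing embedding so the sketch stays self-contained;
    # the paper would use a learned text encoder instead.
    vec = [0.0] * dim
    for tok in text.split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def mlp_score(user_vec, item_vec):
    # One ReLU hidden layer with fixed toy weights; in practice the
    # MLP is trained on user-item interactions.
    x = user_vec + item_vec  # concatenate user and item embeddings
    hidden = [max(0.0, sum(xi * 0.1 for xi in x)) for _ in range(4)]
    return sum(hidden)

desc = "A cozy mystery novel set in a small coastal town."
enriched = (fake_llm(basic_prompt(desc)) + " "
            + fake_llm(rec_driven_prompt(desc)))
item_vec = embed(desc + " " + enriched)      # original + augmented text
user_vec = embed("enjoys mysteries and seaside settings")
score = mlp_score(user_vec, item_vec)
```

The key design point matching the abstract: the recommendation model itself stays simple (an MLP over text embeddings), and the quality gain comes from the diversity of the prompting strategies used to augment the item text.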

Authors (10)
  1. Hanjia Lyu (53 papers)
  2. Song Jiang (66 papers)
  3. Hanqing Zeng (17 papers)
  4. Qifan Wang (129 papers)
  5. Si Zhang (22 papers)
  6. Ren Chen (7 papers)
  7. Jiajie Tang (2 papers)
  8. Yinglong Xia (23 papers)
  9. Jiebo Luo (355 papers)
  10. Christopher Leung (3 papers)
Citations (36)