Large Language Models for Generative Recommendation: A Survey and Visionary Discussions (2309.01157v2)

Published 3 Sep 2023 in cs.IR, cs.AI, and cs.CL

Abstract: Large language models (LLMs) have not only revolutionized the field of natural language processing (NLP) but also have the potential to reshape many other fields, e.g., recommender systems (RS). However, most related work treats an LLM as a component of the conventional recommendation pipeline (e.g., as a feature extractor), which may not fully leverage the generative power of LLMs. Instead of separating the recommendation process into multiple stages, such as score computation and re-ranking, this process can be simplified to a single stage with LLMs: directly generating recommendations from the complete pool of items. This survey reviews the progress, methods, and future directions of LLM-based generative recommendation by examining three questions: 1) What generative recommendation is, 2) Why RS should advance to generative recommendation, and 3) How to implement LLM-based generative recommendation for various RS tasks. We hope that this survey can provide the context and guidance needed to explore this interesting and emerging topic.
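
The one-stage idea described in the abstract can be illustrated with a short sketch. The code below is not from the paper; it contrasts a conventional score-then-rank pipeline with direct generation of recommendations by an LLM, where llm_generate is a hypothetical stand-in for any text-generation backend.

    # Minimal sketch (illustrative only, not the paper's method): multi-stage
    # scoring vs. one-stage LLM-based generative recommendation.
    from typing import Callable, List

    def llm_generate(prompt: str) -> str:
        """Hypothetical LLM call; replace with an actual model or API."""
        # A canned reply keeps the sketch self-contained and runnable.
        return "1. The Matrix\n2. Inception\n3. Blade Runner"

    def conventional_recommend(user_history: List[str], candidates: List[str],
                               score: Callable[[List[str], str], float],
                               top_k: int = 3) -> List[str]:
        """Multi-stage pipeline: score every candidate item, then re-rank and truncate."""
        ranked = sorted(candidates, key=lambda item: score(user_history, item), reverse=True)
        return ranked[:top_k]

    def generative_recommend(user_history: List[str], top_k: int = 3) -> List[str]:
        """One-stage pipeline: the LLM generates recommendations directly,
        without enumerating or scoring a candidate pool."""
        prompt = (
            f"The user has watched: {', '.join(user_history)}.\n"
            f"Recommend {top_k} movies they might enjoy next, as a numbered list."
        )
        reply = llm_generate(prompt)
        return [line.split(". ", 1)[-1] for line in reply.splitlines() if line.strip()]

    if __name__ == "__main__":
        print(generative_recommend(["Interstellar", "Arrival"]))

In the generative setting, the item corpus is implicit in the model rather than enumerated at inference time, which is the simplification the survey examines.
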

Authors (4)
  1. Lei Li (1293 papers)
  2. Yongfeng Zhang (163 papers)
  3. Dugang Liu (22 papers)
  4. Li Chen (590 papers)
Citations (53)