A Survey on Large Language Models for Recommendation (2305.19860v5)

Published 31 May 2023 in cs.IR and cs.AI

Abstract: LLMs have emerged as powerful tools in the field of NLP and have recently gained significant attention in the domain of Recommendation Systems (RS). These models, trained on massive amounts of data using self-supervised learning, have demonstrated remarkable success in learning universal representations and have the potential to enhance various aspects of recommendation systems via effective transfer techniques such as fine-tuning and prompt tuning. The crucial aspect of harnessing the power of LLMs to enhance recommendation quality is the utilization of their high-quality representations of textual features and their extensive coverage of external knowledge to establish correlations between items and users. To provide a comprehensive understanding of existing LLM-based recommendation systems, this survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec), with the latter systematically surveyed for the first time. Furthermore, we systematically review and analyze existing LLM-based recommendation systems within each paradigm, providing insights into their methodologies, techniques, and performance. Additionally, we identify key challenges and several valuable findings to provide researchers and practitioners with inspiration. We have also created a GitHub repository to index relevant papers on LLMs for recommendation: https://github.com/WLiK/LLM4Rec.

LLMs for Recommendation Systems: An Analysis

The paper, "A Survey on LLMs for Recommendation," provides a comprehensive examination of the application of LLMs in the domain of Recommendation Systems (RS). The authors aim to categorize and analyze the diverse methodologies and strategies currently employed, while also shedding light on the challenges and future opportunities inherent in this research area.

Overview

LLMs, characterized by Transformer-based architectures with very large parameter counts, have prominently advanced the field of NLP. Their capacity to learn universal representations and to draw on broad external knowledge has catalyzed interest in using them to improve RS. The paper distinguishes two paradigms for incorporating LLMs into RS, Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec), and provides the first systematic review of the latter.

Findings and Contributions

The paper makes several contributions, cataloging existing methods into distinct paradigms and surfacing insightful findings:

  1. Modeling Paradigms: The paper identifies three main modeling paradigms, distinguished by how the LLM is deployed:
    • LLM Embeddings + RS: Using the LLM as a feature extractor to derive item and user embeddings (a minimal sketch follows this list).
    • LLM Tokens + RS: Utilizing generated tokens for semantic mining and decision-making.
    • LLM as RS: Direct transformation of pre-trained LLMs into recommendation engines, using behavior prompts and task instructions.
  2. Discriminative LLMs: The paper highlights BERT and similar models, utilized primarily for understanding tasks and for deriving embeddings in recommendation pipelines. Fine-tuning and prompt-tuning practices are discussed as ways of enhancing specific recommendation tasks.
  3. Generative LLMs: This core section elaborates on the promising ability of generative models like ChatGPT to perform recommendation tasks via non-tuning strategies (prompting and in-context learning) and tuning paradigms (fine-tuning, prompt tuning, instruction tuning). The authors examine the nuances of recasting recommendation tasks as natural-language tasks that generative models can act on.
  4. Practical Applications: Domains including e-commerce, news, and recruitment enhance their RS with LLMs, whether through fine-tuned domain adaptation or by using LLMs as interactive recommendation mediators.
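
To make the first paradigm concrete, here is a minimal sketch of "LLM Embeddings + RS": a language model serves purely as a feature extractor, and recommendation reduces to nearest-neighbor search over the resulting embeddings. The sentence-transformers library and the all-MiniLM-L6-v2 encoder are illustrative assumptions, not choices made by the survey; any text encoder would do.

```python
# Minimal sketch of the "LLM Embeddings + RS" paradigm: the language model
# serves only as a feature extractor; ranking is plain cosine similarity.
# Assumes `pip install sentence-transformers`; the model choice is illustrative.
from sentence_transformers import SentenceTransformer
import numpy as np

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any text encoder works

items = [
    "Wireless noise-cancelling over-ear headphones",
    "Stainless steel chef's knife, 8 inch",
    "Bluetooth mechanical keyboard for programmers",
    "Yoga mat with alignment markings",
]
# A user profile built from, e.g., titles of previously purchased items.
user_profile = "Recently bought: USB-C docking station; ergonomic mouse"

item_vecs = encoder.encode(items, normalize_embeddings=True)
user_vec = encoder.encode(user_profile, normalize_embeddings=True)

scores = item_vecs @ user_vec    # cosine similarity (vectors are unit-norm)
top_k = np.argsort(-scores)[:2]  # indices of the 2 best-matching items
for i in top_k:
    print(f"{scores[i]:.3f}  {items[i]}")
```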

Challenges and Future Directions

The paper identifies several challenges and potential avenues for future investigation:

  • Biases: Positional and popularity biases in LLM outputs need to be addressed, along with fairness concerns stemming from biases in the training data or across demographic groups.
  • Prompt Design: Effectively representing users and items, working within context-length limits, and aligning LLM output formats with recommendation tasks remain open problems (see the prompt-construction sketch after this list).
  • Evaluation Metrics and Datasets: Conventional ranking metrics may not fully capture LLM capabilities (a small sketch of two such metrics appears below). Benchmarks on industrial-scale datasets are also needed, to avoid the biases that come from overfitting to standard datasets such as MovieLens or Amazon.
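
To ground the prompt-design and bias points above, the following is a hedged sketch of how an "LLM as RS" pipeline might assemble a zero-shot ranking prompt and parse the reply. The template wording, the candidate shuffle (one common mitigation for positional bias), and the regex parser are all illustrative assumptions rather than methods prescribed by the survey.

```python
# Illustrative sketch only: building a zero-shot ranking prompt for an
# "LLM as RS" setup and parsing the model's numbered reply. The template,
# the candidate shuffle (a common mitigation for positional bias), and the
# parsing logic are assumptions for illustration, not the survey's method.
import random
import re

def build_prompt(history: list[str], candidates: list[str], k: int = 3) -> str:
    """Render user history and candidate items into a ranking instruction."""
    candidates = candidates[:]   # copy before shuffling
    random.shuffle(candidates)   # reduce sensitivity to input order
    lines = [
        "You are a recommendation assistant.",
        "The user recently interacted with:",
        *[f"- {h}" for h in history],
        "Candidate items:",
        *[f"{i + 1}. {c}" for i, c in enumerate(candidates)],
        f"Rank the top {k} candidates for this user.",
        "Answer with one candidate title per line, prefixed '1.', '2.', ...",
    ]
    return "\n".join(lines)

def parse_ranking(reply: str) -> list[str]:
    """Extract 'N. title' lines from the model's free-text reply."""
    return [m.group(1).strip()
            for m in re.finditer(r"^\s*\d+\.\s*(.+)$", reply, re.MULTILINE)]

prompt = build_prompt(
    history=["The Matrix", "Blade Runner"],
    candidates=["Dune", "Notting Hill", "Ex Machina", "Paddington"],
)
print(prompt)  # send to any chat-completion endpoint, then parse_ranking(reply)
```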

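The "conventional metrics" mentioned above are easy to state precisely; below is a small self-contained sketch of binary-relevance Recall@k and NDCG@k for a single user's ranked list, using the standard textbook definitions rather than anything specific to this survey.

```python
# Standard Recall@k and NDCG@k for one user's ranked list; a self-contained
# reference implementation of the "conventional metrics" mentioned above.
import math

def recall_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the relevant items that appear in the top-k."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Binary-relevance NDCG: DCG of the list over DCG of an ideal list."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

ranked = ["Dune", "Paddington", "Ex Machina", "Notting Hill"]
relevant = {"Dune", "Ex Machina"}
print(recall_at_k(ranked, relevant, k=3))  # 1.0: both relevant items in top-3
print(ndcg_at_k(ranked, relevant, k=3))    # ~0.92: one hit is not at the top
```
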
Conclusion

The paper provides a foundational understanding of LLM applications in RS. It underscores the need for future research focusing on adaptive prompts, fairness, and harnessing zero/few-shot capabilities, pushing towards scalable, ethical, and more insightful recommendation systems. This work serves as an essential compendium for researchers navigating the intersection of LLMs and RS, delineating a map for future exploration in this promising field.

Authors (12)
  1. Likang Wu
  2. Zhi Zheng
  3. Zhaopeng Qiu
  4. Hao Wang
  5. Hongchao Gu
  6. Tingjia Shen
  7. Chuan Qin
  8. Chen Zhu
  9. Hengshu Zhu
  10. Qi Liu
  11. Hui Xiong
  12. Enhong Chen
Citations (232)