
GenRec: Large Language Model for Generative Recommendation (2307.00457v2)

Published 2 Jul 2023 in cs.IR, cs.AI, cs.CL, and cs.LG

Abstract: In recent years, large language models (LLMs) have emerged as powerful tools for diverse natural language processing tasks. However, their potential for recommender systems under the generative recommendation paradigm remains relatively unexplored. This paper presents an innovative approach to recommendation systems using LLMs based on text data. We present a novel LLM for generative recommendation (GenRec) that utilizes the expressive power of the LLM to directly generate the target item to recommend, rather than calculating a ranking score for each candidate item one by one as in traditional discriminative recommendation. GenRec uses the LLM's understanding ability to interpret context, learn user preferences, and generate relevant recommendations. Our proposed approach leverages the vast knowledge encoded in LLMs to accomplish recommendation tasks. We first formulate specialized prompts to enhance the LLM's ability to comprehend recommendation tasks. We then use these prompts to fine-tune the LLaMA backbone LLM on a dataset of user-item interactions, represented as textual data, to capture user preferences and item characteristics. Our research underscores the potential of LLM-based generative recommendation in revolutionizing the domain of recommendation systems and offers a foundational framework for future explorations in this field. We conduct extensive experiments on benchmark datasets, and the experiments show that GenRec achieves significantly better results on large datasets.

GenRec: LLM for Generative Recommendation

The paper "GenRec: LLM for Generative Recommendation" presents an innovative approach to recommender systems by integrating LLMs through a generative recommendation paradigm. The primary focus is leveraging the extensive capabilities of LLMs to directly generate recommendations based on user interactions and item characteristics captured in textual data.

Overview of GenRec

The innovative aspect of GenRec lies in its departure from traditional discriminative recommendation systems, where each candidate item is assessed individually through ranking scores. Instead, GenRec uses the expressive power of LLMs to generate the target item directly. Because a single decoding pass replaces per-candidate scoring, this paradigm can improve both efficiency and personalization in recommender systems.
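The cost difference between the two paradigms can be made concrete with a toy sketch. The `score` and `fake_lm` functions below are illustrative stand-ins, not the paper's actual models:

```python
# Toy contrast between discriminative and generative recommendation.
candidates = ["Item A", "Item B", "Item C"]
forward_passes = {"discriminative": 0, "generative": 0}

def score(user_history, item):
    # Discriminative: one model call per (user, candidate-item) pair.
    forward_passes["discriminative"] += 1
    return len(set(user_history) & set(item.split()))

def discriminative_recommend(user_history):
    # Must score every candidate, then rank.
    return max(candidates, key=lambda item: score(user_history, item))

def fake_lm(prompt):
    # Generative: a single decoding call emits the target item's text.
    forward_passes["generative"] += 1
    return "Item B"

def generative_recommend(user_history):
    return fake_lm(f"History: {user_history}. Next item:")

discriminative_recommend(["Item", "B"])
generative_recommend(["Item", "B"])
```

Discriminative cost grows with the size of the candidate set (three `score` calls here), while the generative path stays at one decoding call regardless of catalog size.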

The process involves formulating specialized prompts to enhance the LLM's comprehension of recommendation tasks. These prompts are used to fine-tune the LLaMA backbone LLM on datasets of user-item interactions documented as text. This approach captures both user preferences and item attributes, with GenRec demonstrating significant improvement over traditional methods, particularly when operating on large datasets.
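A minimal sketch of the prompt-formulation step is shown below. The template text is an illustrative assumption, not the paper's exact prompt:

```python
def build_prompt(interaction_history):
    """Serialize a user's item-title history into a next-item
    generation prompt for a causal LLM."""
    history = ", ".join(interaction_history)
    return (
        f"The user has interacted with the following items in order: "
        f"{history}. Predict the next item this user will interact with:"
    )

prompt = build_prompt(["The Matrix (1999)", "Inception (2010)"])
```

During fine-tuning, each such prompt would be paired with the ground-truth next item's title as the generation target, so the model learns to complete the prompt with an item name.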

Methodology and Experimental Results

The paper elaborates on the architecture of GenRec, highlighting its method of sequence generation, where user-item interaction sequences are reformatted through specifically designed prompts. This formatting accounts for the rich semantic information embedded in item names, potentially enhancing the model's ability to recommend accurately. The training employed the LLaMA-LoRA architecture, optimizing resource usage and ensuring efficient training on limited GPU capacity.
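The LoRA component of the LLaMA-LoRA setup freezes the pretrained weights and trains only a pair of low-rank adapter matrices per layer, which is what keeps GPU memory requirements low. A minimal NumPy sketch of the LoRA update (shapes and hyperparameters here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))             # frozen pretrained weight
A = rng.normal(scale=0.01, size=(rank, d_in))  # trainable down-projection
B = np.zeros((d_out, rank))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus scaled low-rank adapter path: W x + (alpha/rank) * B A x
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# Zero-initialized B makes the adapter a no-op at the start of training.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters shrink from d_out*d_in to rank*(d_in + d_out).
trainable = A.size + B.size
```

Here only 1,024 adapter parameters are trained against 4,096 frozen base parameters; at LLaMA scale this gap is what makes fine-tuning feasible on limited GPU capacity.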

Experimental comparisons against baseline methods such as P5 showed strong performance on the MovieLens 25M dataset, indicating GenRec's adeptness at extracting and leveraging rich interaction data. However, P5 retained a slight advantage on the Amazon datasets, suggesting areas for potential refinement within the GenRec framework.

Implications and Future Directions

The implications of this paper are twofold: practical and theoretical. Practically, GenRec promises enhanced recommendation systems capable of delivering more personalized user experiences by understanding complex interaction patterns through textual data. Theoretically, it underscores the potential of generative approaches to challenge and possibly outperform traditional paradigms grounded in discriminative methodologies.

Future directions for GenRec include refining sequence generation with more sophisticated prompt formulations and expanding input data to include complex interaction types like ratings or reviews. Testing GenRec with different LLM frameworks could also reveal specific benefits and compromises associated with varying model architectures, suggesting a pathway to further improvements in recommendation systems.

Conclusion

The paper convincingly argues for the integration of LLMs into the recommendation domain, demonstrating that a generative approach can significantly enhance recommendation efficacy. By focusing on the rich semantic layers within textual data, GenRec provides a promising foundation for further exploration and potential innovation across the field of recommendation systems, offering a glimpse at the future of personalized digital experiences.

References (18)
  1. Justin Basilico and Thomas Hofmann. 2004. Unifying collaborative and content-based filtering. In Proceedings of the twenty-first international conference on Machine learning. 9.
  2. Keyword searching and browsing in databases using BANKS. In Proceedings 18th international conference on data engineering. IEEE, 431–440.
  3. Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5). In Proceedings of the 16th ACM Conference on Recommender Systems. 299–315.
  4. F Maxwell Harper and Joseph A Konstan. 2015. The movielens datasets: History and context. Acm transactions on interactive intelligent systems (tiis) 5, 4 (2015), 1–19.
  5. Neural collaborative filtering. In Proceedings of the 26th international conference on world wide web. 173–182.
  6. Grouplens: Applying collaborative filtering to usenet news. Commun. ACM 40, 3 (1997), 77–87.
  7. Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In International Conference on Learning Representations. https://openreview.net/forum?id=Bkg6RiCqY7
  8. Andriy Mnih and Russ R Salakhutdinov. 2007. Probabilistic matrix factorization. Advances in neural information processing systems 20 (2007).
  9. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP). 188–197.
  10. Keiron O’Shea and Ryan Nash. 2015. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458 (2015).
  11. Michael J Pazzani. 1999. A framework for collaborative, content-based and demographic filtering. Artificial intelligence review 13 (1999), 393–408.
  12. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21, 1 (2020), 5485–5551.
  13. Juan Ramos et al. 2003. Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learning, Vol. 242. Citeseer, 29–48.
  14. Collaborative filtering recommender systems. The adaptive web: methods and strategies of web personalization (2007), 291–324.
  15. Alex Sherstinsky. 2020. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena 404 (2020), 132306.
  16. Jieun Son and Seoung Bum Kim. 2017. Content-based filtering for recommendation systems using multiattribute networks. Expert Systems with Applications 89 (2017), 404–412.
  17. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
  18. Robin Van Meteren and Maarten Van Someren. 2000. Using content-based filtering for recommendation. In Proceedings of the machine learning in the new information age: MLnet/ECML2000 workshop, Vol. 30. Barcelona, 47–56.
Authors (7)
  1. Jianchao Ji
  2. Zelong Li
  3. Shuyuan Xu
  4. Wenyue Hua
  5. Yingqiang Ge
  6. Juntao Tan
  7. Yongfeng Zhang
Citations (36)