
Language-Based User Profiles for Recommendation

Published 23 Feb 2024 in cs.CL, cs.HC, cs.IR, and cs.LG | arXiv:2402.15623v1

Abstract: Most conventional recommendation methods (e.g., matrix factorization) represent user profiles as high-dimensional vectors. Unfortunately, these vectors lack interpretability and steerability, and often perform poorly in cold-start settings. To address these shortcomings, we explore the use of user profiles that are represented as human-readable text. We propose the Language-based Factorization Model (LFM), an encoder/decoder architecture in which both the encoder and the decoder are LLMs. The encoder LLM generates a compact natural-language profile of the user's interests from the user's rating history. The decoder LLM uses this summary profile to complete predictive downstream tasks. We evaluate our LFM approach on the MovieLens dataset, comparing it against matrix factorization and an LLM model that predicts directly from the user's rating history. In cold-start settings, we find that our method can achieve higher accuracy than matrix factorization. Furthermore, we find that generating a compact, human-readable summary often performs comparably to or better than direct LLM prediction, while offering better interpretability and shorter model input length. Our results motivate a number of future research directions and potential improvements.
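The two-stage pipeline the abstract describes (encoder LLM summarizes the rating history into a text profile; decoder LLM predicts from that profile) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompts, the `toy_llm` stand-in, and all function names are assumptions made for the example.

```python
from typing import Callable

def encode_profile(llm: Callable[[str], str],
                   history: list[tuple[str, int]]) -> str:
    """Encoder step: compress a rating history into a compact,
    human-readable natural-language profile."""
    lines = "\n".join(f"{title}: {stars}/5" for title, stars in history)
    prompt = "Summarize this user's movie tastes in one short sentence.\n" + lines
    return llm(prompt)

def predict_rating(llm: Callable[[str], str],
                   profile: str, movie: str) -> str:
    """Decoder step: predict a rating for a new item using only the
    compact profile, not the full rating history."""
    prompt = (f"User profile: {profile}\n"
              f"Predict the user's 1-5 star rating for '{movie}'. "
              "Answer with a single number.")
    return llm(prompt)

# Deterministic toy stand-in so the sketch runs end to end without an
# actual LLM API; a real system would call a hosted or local model here.
def toy_llm(prompt: str) -> str:
    if prompt.startswith("Summarize"):
        return "Enjoys space-opera sci-fi; dislikes romance."
    return "4"

profile = encode_profile(toy_llm, [("Star Wars", 5), ("The Notebook", 1)])
rating = predict_rating(toy_llm, profile, "Dune")
print(profile)
print(rating)
```

Note how the decoder's input is only the short profile string, which is what gives the approach its interpretability and shorter input length relative to passing the full rating history to the LLM.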
