
Aligning Large Language Models for Controllable Recommendations (2403.05063v2)

Published 8 Mar 2024 in cs.IR and cs.AI

Abstract: Inspired by the exceptional general intelligence of LLMs, researchers have begun to explore their application in pioneering the next generation of recommender systems: systems that are conversational, explainable, and controllable. However, the existing literature concentrates primarily on integrating domain-specific knowledge into LLMs to enhance accuracy, often neglecting their ability to follow instructions. To address this gap, we first introduce a collection of supervised learning tasks, augmented with labels derived from a conventional recommender model, aimed at explicitly improving LLMs' proficiency in adhering to recommendation-specific instructions. We then develop a reinforcement learning-based alignment procedure to further strengthen LLMs' ability to respond to users' intentions and to mitigate formatting errors. Through extensive experiments on two real-world datasets, our method markedly advances the capability of LLMs to comply with instructions within recommender systems while sustaining a high level of accuracy.
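The abstract describes a two-stage recipe: supervised tasks whose labels come from a conventional recommender model, followed by RL alignment whose reward discourages formatting errors. The Python sketch below illustrates both ideas in miniature. It is an assumption-laden illustration rather than the paper's implementation: the `CATALOG`, the stub `conventional_recommender`, the prompt template, and the reward values are all hypothetical stand-ins.

```python
import re

# Hypothetical item catalog and a stand-in for a conventional sequential
# recommender (e.g., SASRec-style); both are illustrative, not from the paper.
CATALOG = ["The Matrix", "Inception", "Toy Story", "Alien", "Heat"]

def conventional_recommender(history, k=3):
    """Stub: return the top-k catalog items the user has not seen yet."""
    candidates = [title for title in CATALOG if title not in history]
    return candidates[:k]

def build_sft_example(history, instruction, k=3):
    """Stage 1 sketch: build one supervised instruction-following example
    whose label (the response) is derived from the conventional recommender."""
    target = conventional_recommender(history, k)
    prompt = (
        f"User history: {', '.join(history)}. "
        f"Instruction: {instruction} "
        f"Respond with a numbered list of {k} titles only."
    )
    response = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(target))
    return {"prompt": prompt, "response": response}

def format_reward(output, k=3):
    """Stage 2 sketch: a reward that penalizes formatting errors, the kind of
    signal the RL alignment stage optimizes (values here are arbitrary)."""
    lines = [line.strip() for line in output.strip().split("\n")]
    if len(lines) != k:
        return -1.0  # wrong number of recommendations
    pattern = re.compile(r"^\d+\.\s+(.+)$")
    titles = []
    for line in lines:
        match = pattern.match(line)
        if match is None:
            return -1.0  # line is not a numbered-list entry
        titles.append(match.group(1))
    if any(title not in CATALOG for title in titles):
        return -0.5  # hallucinated item outside the catalog
    return 1.0  # well-formed, in-catalog recommendation list

example = build_sft_example(["Heat", "Alien"], "Recommend movies I might like.")
print(example["prompt"])
print(example["response"])
print(format_reward(example["response"]))  # -> 1.0
```

In the full procedure, a shaped reward of this kind would presumably be combined with an intent-following signal and optimized with a policy-gradient method such as PPO, which the paper builds on; the sketch covers only the data construction and the reward shaping.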

Authors (7)
  1. Wensheng Lu
  2. Jianxun Lian
  3. Wei Zhang
  4. Guanghua Li
  5. Mingyang Zhou
  6. Hao Liao
  7. Xing Xie
Citations (8)