Aligning Large Language Models for Controllable Recommendations

(arXiv:2403.05063)
Published Mar 8, 2024 in cs.IR and cs.AI

Abstract

Inspired by the exceptional general intelligence of LLMs, researchers have begun to explore their application in pioneering the next generation of recommender systems: systems that are conversational, explainable, and controllable. However, existing literature concentrates primarily on integrating domain-specific knowledge into LLMs to enhance accuracy, often neglecting their ability to follow instructions. To address this gap, we first introduce a collection of supervised learning tasks, augmented with labels derived from a conventional recommender model, aimed at explicitly improving LLMs' proficiency in adhering to recommendation-specific instructions. We then develop a reinforcement learning-based alignment procedure to further strengthen LLMs' ability to respond to users' intentions and to reduce formatting errors. Through extensive experiments on two real-world datasets, our method markedly advances the capability of LLMs to comply with instructions within recommender systems while maintaining a high level of recommendation accuracy.
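
To make the first stage of the abstract more concrete, the sketch below shows one way supervised instruction-tuning examples could be assembled, with target lists supplied by a conventional recommender (the reference list includes SASRec-style sequential models). The data class, function names, prompt wording, and task mix here are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch: building instruction-following SFT examples whose
# target lists come from a conventional recommender model (e.g., a
# SASRec-style ranker). All names and prompt templates are assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class SFTExample:
    instruction: str    # user intention / control signal in natural language
    history: List[str]  # titles of previously interacted items
    target: str         # formatted recommendation list the LLM should emit


def build_controlled_example(history: List[str],
                             candidates: List[str],
                             intent: str,
                             top_k: int = 5) -> SFTExample:
    """Create one supervised example for a controllable-recommendation task.

    `candidates` is assumed to be the ranked output of a conventional
    recommender; the instruction asks the LLM to respect `intent`
    (e.g., "only items from category X") while recommending.
    """
    instruction = (
        f"Given the user's interaction history, recommend {top_k} items. "
        f"Constraint: {intent}. Answer with a numbered list of item titles only."
    )
    target = "\n".join(f"{i + 1}. {title}"
                       for i, title in enumerate(candidates[:top_k]))
    return SFTExample(instruction=instruction, history=history, target=target)
```

Such examples can then be used for standard supervised fine-tuning (e.g., with LoRA adapters, which the references also cover), teaching the model both the recommendation task and the expected output format.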
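For the second stage, the abstract describes reinforcement learning-based alignment (the references include PPO) that rewards following user intentions and penalizes formatting errors. The toy reward below is a minimal sketch of what such a signal could look like; the specific components, thresholds, and weights are assumptions for illustration, not the paper's reported design.

```python
# Hypothetical sketch of a reward for RL-based alignment: the response must be
# a correctly formatted numbered list, satisfy the user's constraint, and hit
# relevant items. Component weights and penalties are illustrative assumptions.
import re
from typing import Set


def recommendation_reward(response: str,
                          allowed_items: Set[str],
                          relevant_items: Set[str],
                          top_k: int = 5) -> float:
    """Score a generated recommendation list.

    Assumed components:
      * formatting: exactly `top_k` lines of the form "1. Title", "2. Title", ...
      * compliance: fraction of titles satisfying the constraint (`allowed_items`)
      * accuracy:   fraction of titles that are ground-truth relevant
    """
    lines = [ln.strip() for ln in response.strip().splitlines() if ln.strip()]
    titles = []
    for i, line in enumerate(lines):
        match = re.match(rf"{i + 1}\.\s+(.+)", line)
        if match is None:
            return -1.0  # formatting error: heavy penalty
        titles.append(match.group(1))
    if len(titles) != top_k:
        return -1.0  # wrong list length also counts as a formatting error

    compliance = sum(t in allowed_items for t in titles) / top_k
    accuracy = sum(t in relevant_items for t in titles) / top_k
    return 0.5 * compliance + 0.5 * accuracy
```

A reward of this shape could be plugged into a PPO loop over the fine-tuned model's generations; the paper's actual reward design may differ.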

