
PALR: Personalization Aware LLMs for Recommendation (2305.07622v3)

Published 12 May 2023 in cs.IR, cs.AI, and cs.CL

Abstract: LLMs have recently received significant attention for their exceptional capabilities. Despite extensive efforts in developing general-purpose LLMs that can be utilized in various NLP tasks, there has been less research exploring their potential in recommender systems. In this paper, we propose a novel framework, named PALR, which aims to combine user history behaviors (such as clicks, purchases, ratings, etc.) with LLMs to generate user-preferred items. Specifically, we first use user/item interactions as guidance for candidate retrieval. Then we adopt an LLM-based ranking model to generate recommended items. Unlike existing approaches that typically adopt general-purpose LLMs for zero/few-shot recommendation testing or train small-sized LLMs (with less than 1 billion parameters), which cannot fully elicit LLMs' reasoning abilities or leverage rich item-side parametric knowledge, we fine-tune a 7-billion-parameter LLM for the ranking purpose. This model takes retrieval candidates in natural language format as input, with an instruction that explicitly asks it to select results from the input candidates during inference. Our experimental results demonstrate that our solution outperforms state-of-the-art models on various sequential recommendation tasks.

Introduction to PALR

The paper presents a new framework called PALR (Personalization Aware LLMs for Recommendation), designed to enhance recommender systems by integrating users' historical interactions—such as clicks, purchases, and ratings—with LLMs to generate preferred item recommendations for users. The authors propose a novel approach to utilizing LLMs for recommendations, emphasizing the importance of user personalization.

PALR: A Novel Recommendation Framework

The essence of the PALR framework is a multi-step process that first generates user profiles using an LLM based on their interactions with items. A retrieval module then pre-filters candidates from the vast pool of items based on these profiles. Importantly, any retrieval algorithm can be employed in this stage. Finally, the LLM is used to rank these candidates according to the user's historical behaviors.
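The three-stage flow above can be sketched as a simple pipeline. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and the LLM-backed stages (profile generation and ranking) are replaced with genre-overlap heuristics as stand-ins.

```python
# Hypothetical sketch of the PALR pipeline: profile -> retrieve -> rank.
# In the actual framework, stages 1 and 3 are performed by an LLM.

def generate_user_profile(history):
    """Stage 1: distill a user's interaction history into profile keywords.
    PALR uses an LLM here; a genre-frequency heuristic stands in."""
    counts = {}
    for item in history:
        for g in item["genres"]:
            counts[g] = counts.get(g, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:3]

def retrieve_candidates(profile, catalog, k=10):
    """Stage 2: any retrieval algorithm can pre-filter the item pool.
    Here: keep catalog items sharing at least one profile genre."""
    return [item for item in catalog
            if set(item["genres"]) & set(profile)][:k]

def rank_candidates(candidates, history):
    """Stage 3: PALR's fine-tuned LLM ranks the candidates given the
    user's history; genre overlap with the history stands in."""
    hist_genres = {g for item in history for g in item["genres"]}
    return sorted(candidates,
                  key=lambda c: len(set(c["genres"]) & hist_genres),
                  reverse=True)

history = [{"title": "Alien", "genres": ["sci-fi", "horror"]},
           {"title": "Blade Runner", "genres": ["sci-fi", "noir"]}]
catalog = [{"title": "Dune", "genres": ["sci-fi"]},
           {"title": "Notting Hill", "genres": ["romance"]},
           {"title": "The Thing", "genres": ["horror", "sci-fi"]}]

profile = generate_user_profile(history)
ranked = rank_candidates(retrieve_candidates(profile, catalog), history)
print([item["title"] for item in ranked])  # → ['The Thing', 'Dune']
```

Note that the retrieval stage is deliberately pluggable: because the LLM only ranks a short pre-filtered list, any existing retriever (collaborative filtering, embedding search, etc.) can supply the candidates.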

Fine-Tuning LLM for Task Specificity

Critical to PALR's success is fine-tuning a 7-billion-parameter LLM (the LLaMa model) to accommodate the peculiarities of recommendation tasks. This process includes converting user behavior into natural language prompts that the model can understand during training, imparting the ability to discern patterns in user engagement and thus generate relevant item recommendations. The framework's flexibility was tested using two different datasets and displayed superior performance to existing state-of-the-art models in various sequential recommendation tasks.
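The conversion of user behavior into natural-language prompts might look like the following sketch. The exact template is an assumption; the paper's fine-tuning data pairs a user's history and candidate list with an instruction to select from the candidates.

```python
# Hypothetical prompt template for PALR-style instruction tuning.
# The wording is illustrative; the paper's actual template may differ.

def build_ranking_prompt(history_titles, candidate_titles):
    """Format interaction history and retrieval candidates as a
    natural-language ranking instruction."""
    history = ", ".join(history_titles)
    candidates = "\n".join(f"- {t}" for t in candidate_titles)
    return (
        f"A user has interacted with the following items: {history}.\n"
        f"From the candidate list below, select the items this user is "
        f"most likely to prefer next:\n{candidates}"
    )

prompt = build_ranking_prompt(
    ["Alien", "Blade Runner"],
    ["Dune", "Notting Hill", "The Thing"],
)
print(prompt)
```

Constraining the model to answer from the supplied candidate list is what distinguishes this setup from open-ended generation, where an LLM could hallucinate items that do not exist in the catalog.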

Experimental Results and Future Implications

Experiments conducted on two public datasets, MovieLens-1M and Amazon Beauty, demonstrated PALR's significant outperformance over state-of-the-art methods. Notably, PALR showcased its effectiveness in re-ranking items, suggesting substantial improvements in the context of sequential recommendations when compared to traditional approaches. The findings encourage future exploration into optimizing LLMs for recommendation tasks, aiming to balance their powerful capabilities with the need for computational efficiency and reduced latency.

Authors (6)
  1. Fan Yang (877 papers)
  2. Zheng Chen (221 papers)
  3. Ziyan Jiang (16 papers)
  4. Eunah Cho (12 papers)
  5. Xiaojiang Huang (9 papers)
  6. Yanbin Lu (5 papers)
Citations (83)