
Preference Distillation for Personalized Generative Recommendation (2407.05033v1)

Published 6 Jul 2024 in cs.IR

Abstract: Recently, researchers have investigated the capabilities of LLMs for generative recommender systems. Existing LLM-based recommender models are trained by adding user and item IDs to a discrete prompt template. However, the disconnect between IDs and natural language makes it difficult for the LLM to learn the relationships between users. To address this issue, we propose a PErsonAlized PrOmpt Distillation (PeaPOD) approach to distill user preferences into personalized soft prompts. Considering the complexities of user preferences in the real world, we maintain a shared set of learnable prompts that are dynamically weighted based on the user's interests to construct the user-personalized prompt in a compositional manner. Experimental results on three real-world datasets demonstrate the effectiveness of our PeaPOD model on sequential recommendation, top-n recommendation, and explanation generation tasks.
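To make the compositional prompt construction concrete, the sketch below shows one way a shared pool of learnable soft prompts could be dynamically weighted by a user-interest embedding to produce a personalized prompt. All names, dimensions, and the attention-style weighting are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (assumed design): a shared set of learnable prompts is weighted
# by the user's interests to compose a user-personalized soft prompt, which would
# then be prepended to the LLM's input embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompositionalSoftPrompt(nn.Module):
    def __init__(self, num_prompts: int = 16, prompt_len: int = 8,
                 d_model: int = 768, d_user: int = 64):
        super().__init__()
        # Shared pool of learnable prompts: (num_prompts, prompt_len, d_model)
        self.prompt_pool = nn.Parameter(torch.randn(num_prompts, prompt_len, d_model) * 0.02)
        # Keys used to score each shared prompt against the user's interests
        self.prompt_keys = nn.Parameter(torch.randn(num_prompts, d_user) * 0.02)
        # Projects a user representation into the key space
        self.user_proj = nn.Linear(d_user, d_user)

    def forward(self, user_emb: torch.Tensor) -> torch.Tensor:
        """user_emb: (batch, d_user) -> personalized soft prompt (batch, prompt_len, d_model)."""
        query = self.user_proj(user_emb)              # (batch, d_user)
        scores = query @ self.prompt_keys.t()         # (batch, num_prompts)
        weights = F.softmax(scores, dim=-1)           # dynamic, user-dependent weights
        # Weighted combination of the shared prompts -> compositional user prompt
        return torch.einsum("bn,nld->bld", weights, self.prompt_pool)


# Usage with hypothetical user-interest embeddings
module = CompositionalSoftPrompt()
user_emb = torch.randn(4, 64)
soft_prompt = module(user_emb)   # shape: (4, 8, 768)
```

Because users share the same prompt pool, related users can end up with similar mixtures, which is one way such a scheme could express relationships between users without relying on discrete ID tokens.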

Authors (3)
  1. Jerome Ramos (4 papers)
  2. Bin Wu (202 papers)
  3. Aldo Lipani (27 papers)