A Practice-Friendly LLM-Enhanced Paradigm with Preference Parsing for Sequential Recommendation (2406.00333v2)

Published 1 Jun 2024 in cs.IR

Abstract: The training paradigm integrating large language models (LLMs) is gradually reshaping sequential recommender systems (SRS) and has shown promising results. However, most existing LLM-enhanced methods rely on rich textual information on the item side and on instance-level supervised fine-tuning (SFT) to inject collaborative information into the LLM, which is inefficient and limited in many applications. To alleviate these problems, this paper proposes a practice-friendly LLM-enhanced paradigm with preference parsing (P2Rec) for SRS. Specifically, in the information reconstruction stage, we design a new user-level SFT task for collaborative information injection with the assistance of a pre-trained SRS model, which is more efficient and compatible with limited text information. Our goal is to let the LLM learn to reconstruct a corresponding prior preference distribution from each user's interaction sequence, which requires the LLM to effectively parse the latent category of each item and the relationships between different items. In the information augmentation stage, we feed each item into the LLM to obtain a set of enhanced embeddings that combine collaborative information and LLM inference capabilities. These embeddings can then be used to help train various future SRS models. Finally, we verify the effectiveness and efficiency of P2Rec on three SRS benchmark datasets.
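
The information reconstruction stage lends itself to a small sketch. Below is a minimal, hypothetical NumPy illustration of how a prior preference distribution over latent item categories might be derived from a user's interaction sequence via a pre-trained SRS model; the specifics (the number of categories `K`, the `category_centroids`, and the softmax soft-assignment) are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of the user-level SFT target construction: derive a
# prior preference distribution over K latent item categories from a user's
# interaction sequence, using item embeddings from a pre-trained SRS model.
# All names (K, category_centroids, ...) are illustrative, not the paper's API.

import numpy as np

K = 16  # assumed number of latent item categories

def item_category_probs(item_embedding: np.ndarray,
                        category_centroids: np.ndarray) -> np.ndarray:
    """Soft-assign one item embedding to K latent categories
    (softmax over similarity to assumed category centroids)."""
    logits = category_centroids @ item_embedding          # shape (K,)
    exp = np.exp(logits - logits.max())                   # numerically stable softmax
    return exp / exp.sum()

def user_preference_distribution(item_embeddings: list[np.ndarray],
                                 category_centroids: np.ndarray) -> np.ndarray:
    """Average per-item category distributions over the interaction sequence
    to form the prior preference distribution the LLM must reconstruct."""
    per_item = [item_category_probs(e, category_centroids) for e in item_embeddings]
    return np.mean(per_item, axis=0)

# Usage: each (user sequence, preference distribution) pair would form one
# user-level SFT example, with the LLM trained to output the distribution
# given the sequence of items.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(K, 64))
sequence = [rng.normal(size=64) for _ in range(10)]  # stand-ins for SRS item embeddings
target = user_preference_distribution(sequence, centroids)
print(target.round(3), target.sum())  # a valid distribution: sums to 1.0
```

Under this reading, the supervision signal is per user rather than per interaction instance, which is consistent with the abstract's efficiency claim: one SFT example summarizes an entire interaction sequence.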

Authors (8)
  1. Dugang Liu (22 papers)
  2. Shenxian Xian (1 paper)
  3. Xiaolin Lin (2 papers)
  4. Xiaolian Zhang (2 papers)
  5. Hong Zhu (52 papers)
  6. Yuan Fang (146 papers)
  7. Zhen Chen (151 papers)
  8. Zhong Ming (21 papers)