EmbSum: Leveraging the Summarization Capabilities of Large Language Models for Content-Based Recommendations (2405.11441v2)
Abstract: Content-based recommendation systems play a crucial role in delivering personalized content to users in the digital world. In this work, we introduce EmbSum, a novel framework that enables offline pre-computation of user and candidate-item representations while capturing the interactions within a user's engagement history. Using a pretrained encoder-decoder model and poly-attention layers, EmbSum derives User Poly-Embeddings (UPEs) and Content Poly-Embeddings (CPEs) to calculate relevance scores between users and candidate items. EmbSum learns from long user engagement histories by generating user-interest summaries with supervision from a large language model (LLM). The effectiveness of EmbSum is validated on two datasets from different domains, where it surpasses state-of-the-art (SoTA) methods with higher accuracy and fewer parameters. Additionally, the model's ability to generate summaries of user interests is a valuable by-product that enhances its usefulness for personalized content recommendations.
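The abstract describes deriving multiple embeddings per user and per item via poly-attention, then scoring user-item relevance from those embeddings. The following is a minimal sketch of that idea, not the paper's actual implementation: the learnable context codes, the random inputs, and the max-inner-product scoring rule are illustrative assumptions (the paper's exact pooling and scoring may differ).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_attention(token_embs, codes):
    """Pool a token sequence into k embeddings via k learnable context codes.

    token_embs: (seq_len, d) encoder outputs
    codes:      (k, d) learnable context codes (randomly initialized here)
    returns:    (k, d) poly-embedding
    """
    attn = softmax(codes @ token_embs.T, axis=-1)  # (k, seq_len), rows sum to 1
    return attn @ token_embs                       # (k, d)

rng = np.random.default_rng(0)
d, k_user, k_item = 8, 4, 2

# Stand-ins for encoder outputs over a user's engagement history and an item.
user_tokens = rng.normal(size=(16, d))
item_tokens = rng.normal(size=(6, d))

upe = poly_attention(user_tokens, rng.normal(size=(k_user, d)))  # User Poly-Embedding
cpe = poly_attention(item_tokens, rng.normal(size=(k_item, d)))  # Content Poly-Embedding

# Illustrative relevance score: max inner product over all UPE/CPE pairs.
score = float((upe @ cpe.T).max())
```

Because UPEs and CPEs depend only on the user history and the item respectively, both can be precomputed offline, leaving only the cheap inner-product scoring at serving time.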
- Chiyu Zhang
- Yifei Sun
- Minghao Wu
- Jun Chen
- Jie Lei
- Muhammad Abdul-Mageed
- Rong Jin
- Angli Liu
- Ji Zhu
- Sem Park
- Ning Yao
- Bo Long