LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application (2405.03988v2)
Abstract: Contemporary recommendation systems predominantly rely on ID embeddings to capture latent associations among users and items. However, this approach overlooks the wealth of semantic information embedded within textual descriptions of items, leading to suboptimal performance and poor generalization. Leveraging the capability of LLMs to comprehend and reason about textual content presents a promising avenue for advancing recommendation systems. To achieve this, we propose an LLM-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge. We address computational complexity concerns by utilizing pretrained LLMs as item encoders and freezing the LLM parameters to avoid catastrophic forgetting and preserve open-world knowledge. To bridge the gap between the open-world and collaborative domains, we design a twin-tower structure supervised by the recommendation task and tailored for practical industrial application. Through experiments on a real large-scale industrial dataset and online A/B tests, we demonstrate the efficacy of our approach in industrial applications. We also achieve state-of-the-art performance on six Amazon Review datasets, verifying the superiority of our method.
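The core idea in the abstract — a frozen, pretrained LLM item encoder feeding two trainable projection towers that map open-world embeddings into a collaborative space — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy `frozen_llm_encode` function, the embedding dimensions, and the averaging of a user's item history are all assumptions standing in for the real LLM encoder and user tower.

```python
import math
import random

EMB_DIM = 8   # hypothetical size of the frozen LLM item embedding
PROJ_DIM = 4  # hypothetical size of the collaborative-space embedding

def frozen_llm_encode(text):
    """Stand-in for the frozen, pretrained LLM item encoder: deterministic
    per input text and never updated during recommendation training."""
    rng = random.Random(sum(ord(c) for c in text))  # deterministic toy seed
    return [rng.gauss(0, 1) for _ in range(EMB_DIM)]

def project(vec, weights):
    """Trainable linear projection mapping an open-world embedding into
    the collaborative domain (one tower of the twin-tower structure)."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

rng = random.Random(0)
# Separate trainable projections for the user tower and the item tower;
# only these (not the LLM encoder) would receive gradients in training.
user_proj = [[rng.gauss(0, 0.1) for _ in range(EMB_DIM)] for _ in range(PROJ_DIM)]
item_proj = [[rng.gauss(0, 0.1) for _ in range(EMB_DIM)] for _ in range(PROJ_DIM)]

# User tower input: average the frozen embeddings of the user's item history
# (a simplifying assumption for this sketch).
history = ["wireless mouse", "mechanical keyboard"]
hist_vecs = [frozen_llm_encode(t) for t in history]
user_vec = [sum(col) / len(col) for col in zip(*hist_vecs)]

user_emb = project(user_vec, user_proj)
item_emb = project(frozen_llm_encode("usb-c hub"), item_proj)

score = cosine(user_emb, item_emb)  # similarity score used for ranking
```

In this sketch, the recommendation-task supervision described in the paper would backpropagate through `user_proj` and `item_proj` only, leaving `frozen_llm_encode` untouched — which is how the framework avoids catastrophic forgetting of open-world knowledge while adapting to the collaborative domain.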
- Jian Jia (16 papers)
- Yipei Wang (19 papers)
- Yan Li (505 papers)
- Honggang Chen (21 papers)
- Xuehan Bai (3 papers)
- Zhaocheng Liu (34 papers)
- Jian Liang (162 papers)
- Quan Chen (91 papers)
- Han Li (182 papers)
- Peng Jiang (272 papers)
- Kun Gai (125 papers)