Enhancing Recommender Systems Using Textual Embeddings from Pre-trained Language Models (2504.08746v1)
Abstract: Recent advancements in large language models (LLMs) and pre-trained language models (PLMs) such as BERT and RoBERTa have revolutionized natural language processing, enabling a deeper understanding of human-like language. In this paper, we explore enhancing recommender systems with textual embeddings from PLMs to address the limitations of traditional recommender systems that rely solely on explicit features from users, items, and user-item interactions. By transforming structured data into natural-language representations, we generate high-dimensional embeddings that capture deeper semantic relationships among users, items, and contexts. Our experiments demonstrate that this approach significantly improves recommendation accuracy and relevance, yielding more personalized and context-aware recommendations. These findings underscore the potential of PLMs to enhance the effectiveness of recommender systems.
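The core pipeline the abstract describes, serializing a structured user-item record into natural language and embedding it with a PLM, can be illustrated with a minimal sketch. The model choice (bert-base-uncased), the serialization template, the record fields, and mean pooling are all illustrative assumptions here, not details taken from the paper:

```python
# Minimal sketch (not the paper's exact pipeline): serialize a structured
# record into natural language, then embed it with a pre-trained language
# model. Model, field names, and pooling strategy are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def record_to_text(record: dict) -> str:
    """Turn a structured user-item record into a natural-language sentence.

    The template and field names are hypothetical examples.
    """
    return (
        f"User {record['user_id']}, a {record['age']}-year-old "
        f"{record['occupation']}, rated the {record['genre']} movie "
        f"'{record['title']}' {record['rating']} out of 5."
    )

@torch.no_grad()
def embed(text: str) -> torch.Tensor:
    """Return a fixed-size embedding via mean pooling over token states."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    hidden = model(**inputs).last_hidden_state        # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)     # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, 768)

record = {"user_id": 42, "age": 29, "occupation": "teacher",
          "genre": "sci-fi", "title": "Arrival", "rating": 5}
vector = embed(record_to_text(record))
print(vector.shape)  # torch.Size([1, 768])
```

In the paper's setting, such embeddings would feed a downstream recommender, for instance by scoring user-item pairs via cosine similarity between their embedding vectors or by concatenating them with conventional interaction features.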