Leveraging Large Language Models for Sequential Recommendation (2309.09261v1)

Published 17 Sep 2023 in cs.IR

Abstract: Sequential recommendation problems have received increasing attention in research during the past few years, leading to the inception of a large variety of algorithmic approaches. In this work, we explore how LLMs, which are nowadays introducing disruptive effects in many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we devise and evaluate three approaches to leverage the power of LLMs in different ways. Our results from experiments on two datasets show that initializing the state-of-the-art sequential recommendation model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20% compared to the vanilla BERT4Rec model. Furthermore, we find that a simple approach that leverages LLM embeddings for producing recommendations can provide competitive performance by highlighting semantically related items. We publicly share the code and data of our experiments to ensure reproducibility.

Leveraging LLMs for Sequential Recommendation

In this paper, the authors investigate the potential of LLMs to improve sequential recommendation systems, which are designed to predict the next likely user interest or action based on a sequence of past interactions. The paper introduces three distinct approaches for integrating LLMs with sequential recommendation systems, evaluates their effectiveness, and discusses the implications of this integration in terms of recommendation accuracy and computational efficiency.

Methodological Overview

The authors propose three methodologies for incorporating LLMs into sequential recommendation models:

  1. Semantic Embedding-Based Recommendations: This approach leverages the rich semantic embeddings produced by LLMs to recommend items that are semantically similar to those in a user's session history. The results show promising performance, particularly on datasets where item names are descriptive of their brand or category. The item embeddings are obtained from an OpenAI embedding model and compared using various similarity measures (a minimal sketch of this nearest-neighbor scheme follows this list).
  2. Prompt-Based Fine-Tuning: In this methodology, the authors explore the use of a fine-tuned LLM to generate recommendations from session prompts. Fine-tuning adjusts the weights of a base LLM (OpenAI's ada model) on domain-specific data consisting of session prompts and their completions. This generative approach, while novel, showed mixed results that depended on dataset characteristics, and it faced challenges such as hallucinated items, i.e., generated recommendations that do not exist in the catalog (a sketch of how such training pairs might be assembled follows this list).
  3. LLM-Enhanced BERT4Rec Model: The paper's most effective strategy was enhancing the BERT4Rec sequential model with LLM-based item embeddings. By initializing BERT4Rec's embedding layer with PCA-reduced representations of the LLM embeddings, the authors achieved significant improvements on accuracy metrics such as NDCG and MRR, demonstrating the value of the semantic richness that LLMs provide for sequential prediction (a sketch of this initialization follows this list).
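
A minimal sketch of the first approach, assuming item embeddings have already been fetched from an LLM embedding API and that cosine similarity is the comparison measure; the function and variable names here are hypothetical, not taken from the authors' released code.

import numpy as np

def recommend_similar(session_items, item_embeddings, k=10):
    """Recommend the k catalog items closest to the mean session embedding."""
    # Represent the session as the average of its item embeddings.
    session_vec = np.mean([item_embeddings[i] for i in session_items], axis=0)

    # Score every item not already in the session by cosine similarity.
    candidates = [i for i in item_embeddings if i not in session_items]
    matrix = np.stack([item_embeddings[i] for i in candidates])
    sims = matrix @ session_vec / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(session_vec) + 1e-12
    )
    return [candidates[j] for j in np.argsort(-sims)[:k]]

With item_embeddings populated for, say, a beauty catalog, recommend_similar(["shampoo_a", "conditioner_b"], item_embeddings) returns the ten items whose embeddings lie closest to the session's average vector.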
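
For the second approach, the sketch below shows one plausible way to assemble prompt/completion training pairs from interaction sessions, where the prompt lists the items seen so far and the completion is the next item; the prompt template is an assumption for illustration, not the paper's verbatim format.

import json

def write_training_pairs(sessions, path="finetune_data.jsonl"):
    """Write one JSONL record per session: history as prompt, next item as completion."""
    with open(path, "w") as f:
        for items in sessions:
            record = {
                "prompt": "Items so far: " + ", ".join(items[:-1]) + " ->",
                # Leading space in the completion is a common fine-tuning convention.
                "completion": " " + items[-1],
            }
            f.write(json.dumps(record) + "\n")

Because a model fine-tuned this way generates free text, its outputs may name items that do not exist in the catalog, which is why generated recommendations generally need to be matched back to real catalog items.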
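
Finally, a sketch of the third and best-performing approach, assuming the high-dimensional LLM embeddings are reduced with PCA to the model's hidden size and copied row-for-row into the item embedding table; BERT4Rec's embedding layer is taken to be a standard torch.nn.Embedding whose row indices match the order of llm_vectors, and special tokens (padding, masking) are ignored for brevity.

import numpy as np
import torch
from sklearn.decomposition import PCA

def init_item_embeddings(embedding_layer, llm_vectors):
    """Overwrite a torch.nn.Embedding's weights with PCA-reduced LLM vectors."""
    hidden = embedding_layer.embedding_dim
    # Reduce, e.g., 1536-dimensional LLM embeddings to the model's hidden size.
    reduced = PCA(n_components=hidden).fit_transform(np.asarray(llm_vectors))
    with torch.no_grad():
        embedding_layer.weight.copy_(torch.tensor(reduced, dtype=torch.float32))

BERT4Rec then trains as usual; only the starting point of its embedding table changes, which is what the reported 15-20% NDCG improvement over the vanilla model is attributed to.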

Results and Implications

The empirical evaluation, conducted on two distinct datasets (Amazon Beauty and Delivery Hero), yielded the following findings:

  • Accuracy Improvements: The embedding-enhanced BERT4Rec model consistently achieves the best scores on accuracy metrics such as NDCG and HR, underscoring the value of combining semantically rich LLM embeddings with transformer-based sequential models. This suggests substantial potential for applying LLM embeddings across diverse recommendation domains.
  • Variable Performance: The performance of semantic embedding-based recommendations varies greatly between datasets, influenced by item naming conventions and catalog sparsity. This highlights considerations for dataset-specific characteristics when deploying LLM-based recommendation models.
  • Beyond-Accuracy Metrics: The semantic-based approaches achieved better item coverage and novelty, owing to their ability to bypass the popularity bias inherent in collaborative filtering models. However, their serendipity results aligned closely with those of the traditional models, underscoring the continued importance of personalized recommendations.

Future Directions

The paper opens numerous avenues for further research:

  • Investigating the generalizability of these approaches across different domains and datasets with varied characteristics.
  • Exploring the impact of diverse LLM architectures, including alternative transformer-based models, and their influence on recommendation outcomes.
  • Delving deeper into hybrid models combining strengths of embedding-based and prompt-based methods, potentially leveraging multiple modalities for improved recommendations.

In conclusion, the work demonstrates the potential of LLMs to enrich sequential recommendation models both in terms of semantic understanding and prediction accuracy. Future research can further expand this integration, exploring nuanced applications and cross-domain effectiveness of LLM embeddings within the recommender systems space.

Authors (6)
  1. Jesse Harte
  2. Wouter Zorgdrager
  3. Panos Louridas
  4. Asterios Katsifodimos
  5. Dietmar Jannach
  6. Marios Fragkoulis
Citations (67)