LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking (2311.02089v1)

Published 25 Oct 2023 in cs.IR, cs.AI, and cs.CL

Abstract: Recently, LLMs have exhibited significant progress in language understanding and generation. By leveraging textual features, customized LLMs are also applied for recommendation and demonstrate improvements across diverse recommendation scenarios. Yet the majority of existing methods perform training-free recommendation that heavily relies on pretrained knowledge (e.g., movie recommendation). In addition, inference on LLMs is slow due to autoregressive generation, rendering existing methods less effective for real-time recommendation. As such, we propose a two-stage framework using LLMs for ranking-based recommendation (LlamaRec). In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history. Then, both history and retrieved items are fed to the LLM in text via a carefully designed prompt template. Instead of generating next-item titles, we adopt a verbalizer-based approach that transforms output logits into probability distributions over the candidate items. Therefore, the proposed LlamaRec can efficiently rank items without generating long text. To validate the effectiveness of the proposed framework, we compare against state-of-the-art baseline methods on benchmark datasets. Our experimental results demonstrate the effectiveness of LlamaRec, which consistently achieves superior performance in both recommendation quality and efficiency.

An Evaluation of "LlamaRec: Two-Stage Recommendation using LLMs for Ranking"

The paper "LlamaRec: Two-Stage Recommendation using LLMs for Ranking" proposes LlamaRec, a framework intended to bridge the gap in utilizing LLMs for efficient and effective recommendation systems. The authors focus on the traditional limitations of LLM-based recommenders, namely slow autoregressive inference and heavy reliance on pretrained knowledge, and address them through a two-stage process of retrieval followed by ranking. This design positions LlamaRec to exploit the strengths of LLMs while improving both recommendation performance and efficiency.

Methodological Overview

The framework is delineated into two principal stages:

  1. Retrieval Stage: In the initial phase, a small-scale sequential recommender, LRURec, is employed to efficiently shortlist candidate items. LRURec uses linear recurrent units and ID-based predictions to retrieve candidates efficiently from the user interaction history.
  2. Ranking Stage: The innovation of the LlamaRec framework becomes evident in this stage, where an LLM, specifically Llama 2, is tasked with ranking the list of candidate items. The proposed ranking mechanism leverages text to better capture user preferences. Unlike traditional autoregressive generation, LlamaRec introduces a verbalizer-based approach that maps the LLM's output logits directly to ranking scores for the candidate items, thereby circumventing the computational overhead of generating complete item titles (a minimal sketch of this scoring scheme follows the list).

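To make the ranking stage concrete, the following is a minimal sketch of verbalizer-based scoring, not the authors' implementation. It assumes a Hugging Face causal LM (the checkpoint name is a placeholder), that each candidate's index letter tokenizes to a single token, and that the retrieval stage has already produced the candidate list; all function and variable names are illustrative.

    # Minimal sketch of verbalizer-based ranking (illustrative, not the paper's code).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
    model.eval()

    def rank_candidates(history_titles, candidate_titles):
        """Rank retrieved candidates by the next-token logits of their index letters."""
        letters = [chr(ord("A") + i) for i in range(len(candidate_titles))]
        prompt = (
            "### Instruction: Given the user's history, choose the next item.\n"
            "History: " + "; ".join(history_titles) + "\n"
            "Candidates:\n"
            + "\n".join(f"({l}) {t}" for l, t in zip(letters, candidate_titles))
            + "\nAnswer: ("
        )
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # logits for the next token only
        # Verbalizer: read off the logit of each candidate's index letter; no text is generated.
        letter_ids = [tokenizer.convert_tokens_to_ids(l) for l in letters]
        scores = logits[letter_ids]
        order = torch.argsort(scores, descending=True)
        return [candidate_titles[i] for i in order.tolist()]

A single forward pass therefore yields a full ranking of the candidate list, which is where the efficiency gain over autoregressive title generation comes from.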
This two-stage approach not only refines the recommendation accuracy compared to other state-of-the-art models but also substantially improves inference efficiency—a crucial factor for real-time applications.

Experimental Results

The experiments conducted span several datasets, including ML-100k, Beauty, and Games, which are well-established benchmarks for recommendation systems. The performance metrics considered are MRR, NDCG, and Recall, evaluated at various cutoff points. LlamaRec consistently demonstrates superior performance across these datasets against conventional models such as NARM, SASRec, and BERT4Rec.

Specifically, LlamaRec exhibits considerable improvements on ML-100k, with notable increases in MRR@5, NDCG@5, and Recall@5, demonstrating its capability to leverage intricate user-item interactions effectively. Furthermore, the paper highlights LlamaRec's advantages when benchmarked against other LLM-based recommendation approaches, such as PALR and GPT4Rec, showing significant gains in both Recall and NDCG, particularly on the Beauty dataset.
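For concreteness, the cutoff metrics reported above can be computed as in the illustrative snippet below, assuming the common leave-one-out protocol with a single held-out relevant item per user; this is not the authors' evaluation code. Dataset-level scores are the averages of these per-user values.

    import math

    def metrics_at_k(ranked_items, target, k=5):
        """Return (MRR@k, NDCG@k, Recall@k) for one user with one relevant item."""
        if target in ranked_items[:k]:
            rank = ranked_items.index(target) + 1  # 1-based position in the ranking
            mrr = 1.0 / rank
            ndcg = 1.0 / math.log2(rank + 1)  # single relevant item, so IDCG = 1
            recall = 1.0
        else:
            mrr = ndcg = recall = 0.0
        return mrr, ndcg, recall

    # Example: the relevant item appears at rank 2 of the top-5 list.
    print(metrics_at_k(["i7", "i3", "i9", "i1", "i5"], target="i3", k=5))
    # -> (0.5, 0.6309..., 1.0)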

Implications and Future Directions

The development of LlamaRec has considerable implications for both theoretical exploration and practical application. The proposed verbalizer-based ranking architecture not only facilitates efficient computation but also lays a foundation for multi-task learning capabilities within LLMs. By avoiding the resource-intensive autoregressive generation process, LlamaRec can be integrated more readily into large-scale practical deployments.

Looking forward, this framework paves the way for deeper integration of LLMs into recommendation systems, potentially leveraging advancements in model quantization to reduce training overheads. The approach can also inspire future work on richer textual representations of context, thereby improving the granularity of the preferences inferred by recommendation systems.

Conclusion

The LlamaRec framework exemplifies a significant step forward in the use of LLMs for recommender systems by strategically bifurcating the recommendation pipeline into retrieval and ranking stages. Through its efficient approach to inference, LlamaRec enhances not only recommendation quality but also operational efficiency, positioning it as a valuable advancement in AI-driven recommendation. As the field continues to evolve, the principles established in this work may serve as a foundation for the continued refinement of recommendation methodologies employing LLMs.

Authors (5)
  1. Zhenrui Yue (24 papers)
  2. Sara Rabhi (4 papers)
  3. Gabriel de Souza Pereira Moreira (4 papers)
  4. Dong Wang (628 papers)
  5. Even Oldridge (5 papers)
Citations (28)