
Recommender Systems with Generative Retrieval (2305.05065v3)

Published 8 May 2023 in cs.IR and cs.LG

Abstract: Modern recommender systems perform large-scale retrieval by first embedding queries and item candidates in the same unified space, followed by approximate nearest neighbor search to select top candidates given a query embedding. In this paper, we propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates. To that end, we create semantically meaningful tuple of codewords to serve as a Semantic ID for each item. Given Semantic IDs for items in a user session, a Transformer-based sequence-to-sequence model is trained to predict the Semantic ID of the next item that the user will interact with. To the best of our knowledge, this is the first Semantic ID-based generative model for recommendation tasks. We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets. In addition, we show that incorporating Semantic IDs into the sequence-to-sequence model enhances its ability to generalize, as evidenced by the improved retrieval performance observed for items with no prior interaction history.

Overview of "Recommender Systems with Generative Retrieval"

The paper "Recommender Systems with Generative Retrieval" introduces an approach to enhancing the capabilities of modern recommender systems. The authors propose a methodology called Transformer Index for GEnerative Recommenders (TIGER), which combines generative retrieval with semantic representations of items, aiming to outperform traditional state-of-the-art recommenders. The work extends the existing paradigm by assigning each item a semantic identifier (Semantic ID) and retrieving with a sequence-to-sequence model, thereby addressing some limitations of embedding-plus-nearest-neighbor retrieval.
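The idea can be pictured with a toy sketch of the data format: each item maps to a short tuple of codewords (its Semantic ID), and a user session becomes the flattened codeword sequence the sequence-to-sequence model consumes. All item names, codeword values, and tuple lengths below are hypothetical, chosen only for illustration:

```python
# Hypothetical illustration of Semantic IDs and the model's input sequence.
semantic_id = {
    "item_A": (5, 12, 3),
    "item_B": (5, 12, 7),  # shares a prefix with item_A: semantically similar
    "item_C": (9, 1, 4),
}

def session_to_tokens(session):
    """Flatten a user's item session into the codeword sequence fed to the model."""
    tokens = []
    for item in session:
        tokens.extend(semantic_id[item])
    return tokens

print(session_to_tokens(["item_A", "item_C"]))  # [5, 12, 3, 9, 1, 4]
```

At inference time, the decoder emits the next tuple of codewords token by token, and that tuple is looked up as the recommended item.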

Main Contributions

The paper makes several key contributions to the field of recommender systems:

  1. Generative Retrieval Framework: The authors propose a framework that shifts from the traditional embed-then-retrieve approach to a model that generates item identifiers directly. A Transformer-based sequence-to-sequence model is trained to predict the Semantic IDs of items that a user is likely to interact with next.
  2. Semantic ID Generation: Each item is represented using Semantic IDs, formed by semantically meaningful tuples of codewords. These Semantic IDs are generated through residual quantization, specifically using Residual-Quantized Variational AutoEncoder (RQ-VAE), which captures semantic relationships between items and improves the generalization capability of the model.
  3. Improved Performance: The proposed TIGER model demonstrates significant improvements over traditional state-of-the-art models, achieving higher recall and Normalized Discounted Cumulative Gain (NDCG) metrics across multiple datasets. The ability to generalize effectively to items with no prior interaction history is particularly noteworthy.
  4. New Capabilities: The approach enables two additional functionalities: cold-start recommendations, facilitating recommendations of newly added or infrequent items, and enhanced recommendation diversity through a tunable parameter that allows control over generated recommendations.
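The residual quantization at the heart of Semantic ID generation can be sketched as follows. This shows only the greedy quantization step; the full RQ-VAE learns the codebooks jointly with an encoder and decoder, and the tiny two-level, 2-D codebooks here are hypothetical:

```python
def residual_quantize(vec, codebooks):
    """Greedy residual quantization: pick the nearest codeword at each level,
    subtract it, and quantize the remainder at the next level."""
    residual = list(vec)
    codes = []
    for cb in codebooks:
        # Nearest codeword by squared Euclidean distance to the current residual.
        idx = min(
            range(len(cb)),
            key=lambda i: sum((r - c) ** 2 for r, c in zip(residual, cb[i])),
        )
        codes.append(idx)
        residual = [r - c for r, c in zip(residual, cb[idx])]
    return tuple(codes)  # the item's Semantic ID

# Two tiny hand-picked codebooks (level 1, then level 2) in 2-D.
codebooks = [
    [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)],
    [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)],
]
print(residual_quantize((1.2, 1.0), codebooks))  # (1, 0)
```

Because each level quantizes what the previous levels could not represent, items with similar embeddings tend to share a codeword prefix, which is what makes the IDs semantically meaningful.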

Analysis and Performance

The paper includes detailed experimentation on three Amazon Product Reviews datasets, illustrating that TIGER outperforms several contemporary systems like SASRec, S3-Rec, and BERT4Rec. The authors report improvements in standard evaluation metrics, including Recall@5, Recall@10, NDCG@5, and NDCG@10, which are critical measures of recommendation effectiveness and relevance. These compelling results underscore the potential of generative retrieval methodologies in transitioning recommender systems beyond the current state-of-the-art.
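For reference, the metrics above can be computed as in the minimal sketch below, using binary relevance and the next-item setting where a single held-out item is relevant; the ranked list and relevant set are hypothetical:

```python
import math

def recall_at_k(ranked, relevant, k):
    """Fraction of relevant items that appear in the top-k of the ranked list."""
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG@k: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(
        1.0 / math.log2(i + 2) for i, item in enumerate(ranked[:k]) if item in relevant
    )
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal

ranked = ["item_B", "item_A", "item_C"]  # model's ranked predictions
relevant = ["item_A"]                    # the held-out next item
print(recall_at_k(ranked, relevant, 5))          # 1.0
print(round(ndcg_at_k(ranked, relevant, 5), 3))  # 0.631
```

NDCG rewards placing the relevant item higher: here the hit at rank 2 scores 1/log2(3) rather than the full 1.0 a rank-1 hit would earn.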

Theoretical and Practical Implications

The implications of this research are manifold. Theoretically, the paper expands the understanding of generative approaches in recommender systems, demonstrating the efficacy of transformer-based sequence-to-sequence models. Practically, it offers a scalable solution for real-world applications where item catalogs are dynamic and user interactions evolve over time. The shift from learning individual item embeddings to utilizing the semantic space offers potential reductions in computational overhead, particularly concerning memory usage for storing embeddings.
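The memory argument can be made concrete with a back-of-the-envelope calculation; the catalog size, embedding dimension, and codebook sizes below are illustrative assumptions, not figures from the paper:

```python
# Illustrative sizes only; not numbers from the paper.
num_items, dim = 1_000_000, 128

# Classical dual-encoder retrieval: one dim-dimensional embedding per item.
embedding_table_floats = num_items * dim

# Semantic IDs: a few shared codebooks replace the per-item table.
levels, codewords = 4, 256
codebook_floats = levels * codewords * dim

print(embedding_table_floats)  # 128000000
print(codebook_floats)         # 131072
```

Under these assumed sizes the shared codebooks are roughly three orders of magnitude smaller than a per-item embedding table, which is the intuition behind the memory savings (the Transformer's own parameters still cost memory, of course).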

Future Directions

While the paper successfully illustrates the advantages of the TIGER approach, several avenues for future work remain. These include optimizing the inference cost associated with generative models and exploring ways to handle scenarios where generated Semantic IDs do not map to valid items. Additionally, future research might investigate further the integration of rich content features and user behavior signals, potentially leading to even more personalized and accurate recommendations.
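One common way to guarantee that decoded Semantic IDs map to real items, the open issue noted above, is prefix-constrained decoding: at each step, restrict the vocabulary to codewords that extend some valid item's Semantic ID. A minimal sketch with a hypothetical valid-ID set:

```python
# Hypothetical catalog of valid Semantic IDs.
valid_ids = {(5, 12, 3), (5, 12, 7), (9, 1, 4)}

def allowed_next(prefix):
    """Codewords that keep the partial Semantic ID consistent with a real item."""
    n = len(prefix)
    return sorted({sid[n] for sid in valid_ids if sid[:n] == tuple(prefix)})

print(allowed_next([]))       # [5, 9]
print(allowed_next([5, 12]))  # [3, 7]
```

In practice the membership test is typically backed by a prefix trie over the catalog, and the decoder's logits are masked to the allowed set at every step, so beam search can only ever produce valid items.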

Conclusion

In summary, "Recommender Systems with Generative Retrieval" is a significant step forward in the evolution of recommendation methodologies. By leveraging generative models and semantic identification, it presents a robust framework that not only enhances the accuracy of recommendations but also incorporates advanced capabilities necessary for modern applications. This work sets a promising direction for future research on the application of generative models in recommender systems, emphasizing the importance of semantic representations in capturing user intent and item characteristics.

Authors (13)
  1. Shashank Rajput
  2. Nikhil Mehta
  3. Anima Singh
  4. Raghunandan H. Keshavan
  5. Trung Vu
  6. Lukasz Heldt
  7. Lichan Hong
  8. Yi Tay
  9. Vinh Q. Tran
  10. Jonah Samost
  11. Maciej Kula
  12. Ed H. Chi
  13. Maheswaran Sathiamoorthy