
Aligning Language Models for Versatile Text-based Item Retrieval (2402.18899v1)

Published 29 Feb 2024 in cs.IR

Abstract: This paper addresses the gap between general-purpose text embeddings and the specific demands of item retrieval tasks. We demonstrate the shortcomings of existing models in capturing the nuances necessary for zero-shot performance on item retrieval tasks. To overcome these limitations, we propose generating an in-domain dataset from ten tasks tailored to unlocking models' representation ability for item retrieval. Our empirical studies demonstrate that fine-tuning embedding models on the dataset leads to remarkable improvements in a variety of retrieval tasks. We also illustrate the practical application of our refined model in a conversational setting, where it enhances the capabilities of LLM-based Recommender Agents like Chat-Rec. Our code is available at https://github.com/microsoft/RecAI.

References (10)
  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019.
  2. SimCSE: Simple Contrastive Learning of Sentence Embeddings. arXiv preprint arXiv:2104.08821 (2021).
  3. Chat-REC: Towards Interactive and Explainable LLMs-Augmented Recommender System. arXiv preprint arXiv:2303.14524 (2023).
  4. Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations. arXiv preprint arXiv:2308.16505 (2023).
  5. Towards General Text Embeddings with Multi-stage Contrastive Learning. arXiv preprint arXiv:2308.03281 (2023).
  6. Fine-Tuning LLaMA for Multi-Stage Text Retrieval. arXiv preprint arXiv:2310.08319 (2023).
  7. Text and Code Embeddings by Contrastive Pre-training. arXiv preprint arXiv:2201.10005 (2022).
  8. Text Embeddings by Weakly-Supervised Contrastive Pre-training. arXiv preprint arXiv:2212.03533 (2022).
  9. RetroMAE: Pre-Training Retrieval-oriented Language Models via Masked Auto-Encoder. In EMNLP 2022, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics, 538-548.
  10. C-Pack: Packaged Resources To Advance General Chinese Embedding. arXiv preprint arXiv:2309.07597 (2023).

Summary

  • The paper introduces a specialized fine-tuning methodology that significantly improves text-based item retrieval by adapting LLM embeddings to diverse query tasks.
  • It employs ten distinct fine-tuning tasks on gaming datasets, yielding notable gains in retrieval metrics like Hit@5 and Coverage@5.
  • The study validates the approach in a recommender AI agent setting, underscoring LLM adaptability in real-world application scenarios.

Enhancing Text-based Item Retrieval with Specialized Fine-Tuning Tasks

Introduction

Text-based item retrieval, a fundamental operation within recommender systems and search engines, hinges on the effective matching of user queries to relevant items via text embeddings. Despite the remarkable strides in LLMs, a noticeable gap persists between the broad capabilities of general-purpose text embeddings and the specific requirements of item retrieval scenarios. Our examination highlights the limitations of existing approaches and introduces a methodology for augmenting the performance of LLMs through specialized fine-tuning tasks tailored to the nuanced demands of item retrieval.

Methodology Overview

Problem Formulation

The challenge lies in developing an embedding model that accurately maps diverse textual queries to pertinent items within a given dataset, ranging from simple keywords to detailed descriptions. The diversity of query formats necessitates a model capable of understanding and representing varied textual inputs against the item database.
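
Concretely, the retrieval step can be viewed as embedding both the query and the item texts with the same encoder and ranking items by cosine similarity. The following is a minimal sketch under that reading; the model checkpoint and item texts are illustrative assumptions, not the paper's exact setup.

  import numpy as np
  from sentence_transformers import SentenceTransformer

  # Any general-purpose embedding model; E5 is one of the backbones studied in the paper.
  model = SentenceTransformer("intfloat/e5-base-v2")

  item_texts = [
      "Halo Infinite: sci-fi first-person shooter with campaign and multiplayer.",
      "Forza Horizon 5: open-world racing game set in Mexico.",
  ]
  item_vecs = model.encode(item_texts, normalize_embeddings=True)

  query = "relaxing driving game with great scenery"
  query_vec = model.encode([query], normalize_embeddings=True)

  # With normalized vectors, the dot product equals cosine similarity.
  scores = (item_vecs @ query_vec.T).ravel()
  top_k = np.argsort(-scores)[:5]  # indices of the best-matching items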

Fine-Tuning Task Collection

To address the shortcomings of general-purpose embedding models, we propose a set of ten distinct tasks designed to fine-tune these models for enhanced item retrieval performance. These tasks encompass scenarios from implicit user preferences (e.g., based on user history or item similarities) to explicit attributes and fuzzy queries (e.g., vague conditions or negative attributes). By addressing a wide range of query contexts, the fine-tuning tasks aim to significantly elevate the model's aptitude in matching queries to relevant items.
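
To make the task collection concrete, the sketch below shows how per-task queries could be instantiated from item metadata via prompt templates. The task names, templates, and metadata fields are illustrative assumptions, not the paper's exact set of ten tasks.

  import random

  # Hypothetical templates for a few of the query styles described above.
  TASK_TEMPLATES = {
      "explicit_attribute": "a {genre} game available on {platform}",
      "negative_attribute": "a {genre} game that is not {excluded}",
      "fuzzy_condition": "something light-hearted to play with friends on {platform}",
  }

  def build_example(item, task):
      """Return one (query, positive item text) training pair for the given task."""
      query = TASK_TEMPLATES[task].format(
          genre=item.get("genre", "casual"),
          platform=item.get("platform", "Xbox"),
          excluded=random.choice(item.get("disliked_tags", ["horror"])),
      )
      return query, item["description"]

  pair = build_example(
      {"genre": "racing", "platform": "Xbox", "description": "Forza Horizon 5 ..."},
      task="explicit_attribute",
  )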

Data Generation and Experimental Setup

Using two gaming datasets (Xbox and Steam), we generate a specialized dataset for fine-tuning, informed by an array of unique prompt templates. The introduction of true negatives, as opposed to the commonly used in-batch negatives, plays a pivotal role in ensuring model stability and efficacy across task variations. The comprehensive dataset allows for the nuanced training of the model to better navigate the intricacies of item retrieval.
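
The role of true negatives can be illustrated with a standard contrastive objective in which each query is scored against its positive item and a set of explicitly sampled negatives, rather than only against other positives in the batch. This is a generic sketch under assumed tensor shapes and temperature, not the paper's exact training code.

  import torch
  import torch.nn.functional as F

  def contrastive_loss(q_emb, pos_emb, neg_emb, temperature=0.05):
      """q_emb: (B, d) queries; pos_emb: (B, d) matched items; neg_emb: (B, K, d) true negatives."""
      q = F.normalize(q_emb, dim=-1)
      pos = F.normalize(pos_emb, dim=-1)
      neg = F.normalize(neg_emb, dim=-1)

      pos_score = (q * pos).sum(dim=-1, keepdim=True)      # (B, 1)
      neg_score = torch.einsum("bd,bkd->bk", q, neg)       # (B, K)
      logits = torch.cat([pos_score, neg_score], dim=1) / temperature
      labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive sits at index 0
      return F.cross_entropy(logits, labels)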

Experimentation and Findings

Overall Performance Enhancements

Empirical results underscore the efficacy of in-domain fine-tuning, demonstrating substantial improvements across a gamut of retrieval tasks. For instance, the fine-tuned versions of models like E5 and BGE-v1.5 significantly outperformed their original counterparts, with marked gains on task-specific metrics such as Hit@5 and Coverage@5. The consistent performance uplift across tasks corroborates the robustness and versatility of the fine-tuned models.
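
For reference, the two reported metrics can be computed per query as sketched below, under common definitions: Hit@k asks whether any relevant item appears in the top-k results, and Coverage@k measures the fraction of a query's relevant items that the top-k list recovers. The paper's exact definitions may differ slightly.

  def hit_at_k(retrieved, relevant, k=5):
      """1.0 if at least one relevant item is in the top-k results, else 0.0."""
      return 1.0 if any(item in relevant for item in retrieved[:k]) else 0.0

  def coverage_at_k(retrieved, relevant, k=5):
      """Fraction of relevant items that appear in the top-k results."""
      if not relevant:
          return 0.0
      return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

  # Averaged over all evaluation queries:
  # mean_hit = sum(hit_at_k(r, g) for r, g in zip(rankings, gold)) / len(rankings)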

Model Comparisons and Adaptability

The experiments further reveal that models incorporating extensive contrastive learning during pretraining, such as E5 and BGE-v1.5, outperform models like BERT and RepLLaMA on item retrieval. Out-of-domain (OOD) testing illustrates the models' varying degrees of adaptability, with some tasks transferring well across domains while others remain domain-specific.

Practical Application: Recommender AI Agents

The refined model's real-world viability is evidenced in its deployment within a conversational Recommender AI Agent setting. Here, the model’s capacity to process raw conversational texts and extract relevant item recommendations underlines its practical utility, potentially advancing the capabilities of human-like interactive recommender systems.
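
A hedged sketch of such an integration is shown below: the raw dialogue is embedded directly as the retrieval query, the fine-tuned embedding model returns candidate items, and an LLM phrases the final recommendation. The function and variable names are illustrative and reuse the encoder and item vectors from the earlier sketch; this is not the Chat-Rec implementation itself.

  def recommend_from_dialogue(dialogue_turns, model, item_vecs, item_texts, llm, k=5):
      """Retrieve candidate items for a conversation and let an LLM compose the reply."""
      query = " ".join(dialogue_turns)                       # raw conversation text as the query
      q_vec = model.encode([query], normalize_embeddings=True)
      scores = (item_vecs @ q_vec.T).ravel()
      top = scores.argsort()[::-1][:k]
      candidates = [item_texts[i] for i in top]
      prompt = (
          "Conversation so far:\n" + query +
          "\n\nCandidate items:\n" + "\n".join(candidates) +
          "\n\nRecommend the best match and briefly explain why."
      )
      return llm(prompt)  # `llm` is any text-generation callable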

Future Outlook and Theoretical Implications

This paper lays the groundwork for further exploration into fine-tuning techniques tailored for specific operational contexts within the broader landscape of LLM applications. By showcasing the potential of task-specific dataset creation and fine-tuning for item retrieval, it opens avenues for research into more advanced, efficient search and recommender systems. The demonstrated approach not only elevates the practical effectiveness of such systems but also contributes to the theoretical understanding of embedding model adaptability and specialization.

Conclusion

Aligning LLMs through specialized fine-tuning tasks represents a promising approach to bridging the gap between general-purpose text embeddings and the specialized demands of text-based item retrieval. The empirical evidence supports the viability of this methodology, heralding an era of more nuanced and effective search and recommender systems, with broader implications for the field of generative AI and LLMs.
