
Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models (2409.11136v1)

Published 17 Sep 2024 in cs.IR, cs.CL, and cs.LG

Abstract: Instruction-tuned LLMs (LM) are able to respond to imperative commands, providing a more natural user interface compared to their base counterparts. In this work, we present Promptriever, the first retrieval model able to be prompted like an LM. To train Promptriever, we curate and release a new instance-level instruction training set from MS MARCO, spanning nearly 500k instances. Promptriever not only achieves strong performance on standard retrieval tasks, but also follows instructions. We observe: (1) large gains (reaching SoTA) on following detailed relevance instructions (+14.3 p-MRR / +3.1 nDCG on FollowIR), (2) significantly increased robustness to lexical choices/phrasing in the query+instruction (+12.9 Robustness@10 on InstructIR), and (3) the ability to perform hyperparameter search via prompting to reliably improve retrieval performance (+1.4 average increase on BEIR). Promptriever demonstrates that retrieval models can be controlled with prompts on a per-query basis, setting the stage for future work aligning LM prompting techniques with information retrieval.

Authors (6)
  1. Orion Weller (31 papers)
  2. Benjamin Van Durme (173 papers)
  3. Dawn Lawrie (31 papers)
  4. Ashwin Paranjape (12 papers)
  5. Yuhao Zhang (107 papers)
  6. Jack Hessel (50 papers)
Citations (4)

Summary

An Analysis of "Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models"

The paper "Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models" introduces a new approach to information retrieval (IR) through a retrieval model named Promptriever. Its primary contribution is demonstrating that retrieval models can be as promptable and instruction-responsive as instruction-tuned LLMs, by training them on a specialized, instruction-based dataset.

Technical Overview

Promptriever builds on instruction-tuned LLMs, integrating their capabilities into a bi-encoder retriever framework. The primary backbone is LLaMA-2 7B, though the paper also evaluates variants built on other backbones such as Mistral v1 and Llama 3.1 Instruct, showing that the approach transfers across model architectures.
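
To make the bi-encoder setup concrete, the sketch below scores a passage against a query with an instruction appended on the query side, using last-token pooling over a Hugging Face causal-LM backbone. The model name, pooling choice, and example texts are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed backbone; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(text: str) -> torch.Tensor:
    """Encode text into one dense vector via last-token pooling."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    vec = hidden[0, -1]  # last-token pooling, as in RepLLaMA-style retrievers
    return torch.nn.functional.normalize(vec, dim=0)

query = "what causes jet lag"
instruction = "A relevant passage must explain circadian rhythms, not give travel tips."
passage = "Jet lag occurs when the body's circadian clock is out of sync with local time."

# The instruction is simply concatenated to the query; passages are embedded
# independently, so the corpus index can still be built once, offline.
q_vec = embed(f"{query} {instruction}")
p_vec = embed(passage)
print(f"relevance score: {torch.dot(q_vec, p_vec).item():.4f}")
```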

The authors build on the MS MARCO dataset, augmenting it with an instruction-based training set of nearly 500k instances, each pairing a query-passage example with a specific instruction. These instructions range from simple imperatives to complex relevance criteria, adding diversity to the training data. The dataset also includes "instruction negatives": passages whose relevance flips depending on the given instruction, which forces the model to adapt dynamically to instructions rather than ignore them.
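
One way to picture the training data and objective: each instance pairs a query and instruction with a positive passage and negatives, where an instruction negative answers the bare query but violates the instruction. The field names and the InfoNCE loss below are assumptions about a plausible bi-encoder setup, not the released dataset's schema or the paper's exact objective.

```python
import torch
import torch.nn.functional as F

# Hypothetical shape of one training instance (illustrative field names).
instance = {
    "query": "treatments for migraine",
    "instruction": "Only non-pharmacological treatments count as relevant.",
    "positive": "Regular sleep and biofeedback can reduce migraine frequency.",
    "negatives": [
        # ordinary hard negative: off-topic for the query itself
        "Tension headaches are usually caused by muscle strain.",
        # instruction negative: answers the bare query, but fails the instruction
        "Triptans are drugs commonly prescribed to abort migraine attacks.",
    ],
}

def info_nce(q_vec, pos_vec, neg_vecs, temperature: float = 0.05):
    """Standard InfoNCE over one positive and k negatives (a common choice
    for bi-encoder training; the paper's exact loss may differ)."""
    sims = torch.stack([q_vec @ pos_vec] + [q_vec @ n for n in neg_vecs])
    logits = (sims / temperature).unsqueeze(0)  # (1, 1 + k)
    target = torch.zeros(1, dtype=torch.long)   # positive sits at index 0
    return F.cross_entropy(logits, target)
```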

Experimental Results

The effectiveness of Promptriever is evaluated on several fronts:

  1. Instruction Following: The model outperforms existing dense retrievers such as RepLLaMA on instruction-following tasks. Specifically, Promptriever gains +14.3 points in p-MRR and +3.1 in nDCG/MAP on FollowIR, and +12.9 Robustness@10 on InstructIR, indicating both adherence to detailed relevance instructions and robustness to varied lexical choices in queries.
  2. Standard Retrieval Tasks: On conventional benchmarks such as MS MARCO and BEIR, Promptriever remains competitive, showcasing its versatility. On in-domain MS MARCO it performs comparably to RepLLaMA, while on out-of-domain BEIR tasks it gains an average of +1.4 nDCG@10 when given a suitable prompt.
  3. Prompt Sensitivity and Robustness: Promptriever can reliably improve retrieval performance through zero-shot prompt instructions. Its responsiveness to prompts, reflected in significant performance gains and reduced variance in nDCG@10 scores, makes tailored natural-language prompts practical; a simple prompt-selection loop is sketched after this list.
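
The "hyperparameter search via prompting" idea can be pictured as a dev-set sweep: append each candidate instruction to every query, retrieve, and keep the prompt with the best mean nDCG@10. The candidate prompts, the retrieve callable, and the metric helper below are all illustrative assumptions; only the selection loop is the point, not the paper's exact procedure.

```python
import math

CANDIDATE_PROMPTS = [
    "",  # baseline: bare query
    "A relevant document answers the query completely and precisely.",
    "Think carefully about whether the passage truly satisfies the query.",
]

def ndcg_at_10(ranked_ids, relevant_ids):
    """Binary-relevance nDCG@10 for one query."""
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, doc_id in enumerate(ranked_ids[:10])
              if doc_id in relevant_ids)
    ideal = sum(1.0 / math.log2(rank + 2)
                for rank in range(min(10, len(relevant_ids))))
    return dcg / ideal if ideal else 0.0

def pick_best_prompt(retrieve, dev_queries, dev_qrels):
    """retrieve(text) -> ranked list of doc ids; dev_qrels maps each query
    to its set of relevant doc ids."""
    best_prompt, best_score = None, -1.0
    for prompt in CANDIDATE_PROMPTS:
        scores = [ndcg_at_10(retrieve(f"{q} {prompt}".strip()), dev_qrels[q])
                  for q in dev_queries]
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best_prompt, best_score = prompt, mean
    return best_prompt, best_score
```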

Implications and Future Directions

Promptriever's design has notable implications for both the theory and practice of IR. By aligning the instruction-following capabilities of LMs with retrieval models, it opens the door to more adaptive search experiences in which users refine and direct the search process dynamically with natural-language instructions. This adaptability sets it apart from traditional IR models that rely on static semantic similarity alone, and it is particularly promising for use cases requiring nuanced, context-sensitive retrieval.

The approach also suggests potential pathways for further research:

  • Enhanced Prompt Engineering: Future work may explore more sophisticated prompt engineering techniques, perhaps incorporating few-shot learning or in-context learning methodologies to further boost the model's adaptive capabilities.
  • Dataset Expansion: Extending the instruction-based dataset to include a broader range of domains and instructions could generalize the model's applicability even further.
  • Cross-disciplinary Applications: The versatility of promptable retrievers like Promptriever can be explored in cross-disciplinary applications, including legal document retrieval, personalized recommendation systems, and interactive AI-driven educational tools.

In conclusion, the Promptriever model represents a significant step towards integrating the robustness and adaptability of instruction-following LMs into the IR domain, enhancing both the effectiveness and user experience of search systems. The paper provides a comprehensive evaluation and validation of this approach, setting a solid groundwork for future advancements in the field of information retrieval.