Large Language Models are Strong Zero-Shot Retriever (2304.14233v2)

Published 27 Apr 2023 in cs.CL and cs.IR

Abstract: In this work, we propose a simple method that applies an LLM to large-scale retrieval in zero-shot scenarios. Our method, LLM as Retriever (LameR), is built upon no neural model other than an LLM; it breaks away from brute-force combinations of retrievers with LLMs and lifts zero-shot retrieval performance to be highly competitive on benchmark datasets. Essentially, we propose to augment a query with its potential answers by prompting the LLM with a composition of the query and the query's in-domain candidates. The candidates, regardless of whether they are correct or wrong, are obtained by a vanilla retrieval procedure on the target collection. As part of the prompt, they are likely to help the LLM generate more precise answers through pattern imitation or candidate summarization. Even if all the candidates are wrong, the prompt at least makes the LLM aware of in-collection patterns and genres. Moreover, because of the low performance of a self-supervised retriever, LLM-based query augmentation becomes less effective when the retriever bottlenecks the whole pipeline. Therefore, we propose to leverage a non-parametric, lexicon-based method (e.g., BM25) as the retrieval module to capture query-document overlap in a literal fashion. As such, LameR makes the retrieval procedure transparent to the LLM, thus circumventing the performance bottleneck.
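
The following is a minimal sketch of the retrieve-prompt-retrieve pipeline the abstract describes, not the paper's reference implementation. The prompt wording, the candidate count, and the `llm_generate` callable are illustrative assumptions; BM25 is provided here via the `rank_bm25` package.

```python
# A hedged sketch of the LameR pipeline: BM25 candidates -> LLM query
# augmentation -> BM25 again on the augmented query.
from rank_bm25 import BM25Okapi


def tokenize(text: str) -> list[str]:
    return text.lower().split()


class BM25Retriever:
    def __init__(self, corpus: list[str]):
        self.corpus = corpus
        self.bm25 = BM25Okapi([tokenize(doc) for doc in corpus])

    def search(self, query: str, k: int = 10) -> list[str]:
        return self.bm25.get_top_n(tokenize(query), self.corpus, n=k)


def lamer_retrieve(query: str, retriever: BM25Retriever,
                   llm_generate, k_candidates: int = 3,
                   k_final: int = 10) -> list[str]:
    # Step 1: vanilla lexical retrieval collects in-domain candidates;
    # even wrong ones convey in-collection patterns and genres.
    candidates = retriever.search(query, k=k_candidates)

    # Step 2: prompt the LLM with the query plus candidates so it can
    # imitate their patterns or summarize them into a likely answer.
    # (This template is an assumption, not the paper's exact prompt.)
    prompt = (
        f"Question: {query}\n"
        + "".join(f"Possible passage {i + 1}: {c}\n"
                  for i, c in enumerate(candidates))
        + "Write a passage that answers the question:"
    )
    pseudo_answer = llm_generate(prompt)  # any LLM completion call

    # Step 3: augment the query with the generated answer and retrieve
    # again with BM25, which matches query-document overlap literally.
    augmented_query = f"{query} {pseudo_answer}"
    return retriever.search(augmented_query, k=k_final)
```

Keeping the retriever purely lexical is the point of the design: the LLM only ever sees and produces text, so the retrieval step stays transparent to it and no weak self-supervised dense retriever bottlenecks the pipeline.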

Authors (6)
  1. Tao Shen (87 papers)
  2. Guodong Long (115 papers)
  3. Xiubo Geng (36 papers)
  4. Chongyang Tao (61 papers)
  5. Tianyi Zhou (172 papers)
  6. Daxin Jiang (138 papers)
Citations (26)