
UnifieR: A Unified Retriever for Large-Scale Retrieval (2205.11194v2)

Published 23 May 2022 in cs.IR and cs.CL

Abstract: Large-scale retrieval aims to recall relevant documents from a huge collection given a query. It relies on representation learning to embed documents and queries into a common semantic encoding space. According to the encoding space, recent retrieval methods based on pre-trained language models (PLMs) can be coarsely categorized into either dense-vector or lexicon-based paradigms. These two paradigms unveil the PLMs' representation capability at different granularities, i.e., global sequence-level compression and local word-level contexts, respectively. Inspired by their complementary global-local contextualization and distinct representation views, we propose a new learning framework, UnifieR, which unifies dense-vector and lexicon-based retrieval in one model with a dual-representing capability. Experiments on passage retrieval benchmarks verify its effectiveness in both paradigms. A uni-retrieval scheme is further presented with even better retrieval quality. Lastly, we evaluate the model on the BEIR benchmark to verify its transferability.
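To make the two paradigms concrete, the following minimal sketch (not the authors' code; all weights are random placeholders) contrasts a dense sequence-level score with a lexicon-based word-level score, and combines them in the spirit of the uni-retrieval scheme. The equal weighting of the two scores is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab = 4, 8  # toy sizes; real models use hundreds/thousands of dims

# Dense paradigm: one global vector per text (e.g., a [CLS]-style embedding);
# relevance is an inner product in the shared encoding space.
q_dense = rng.normal(size=dim)
d_dense = rng.normal(size=dim)
dense_score = float(q_dense @ d_dense)

# Lexicon paradigm: non-negative per-term weights over the vocabulary
# (local word-level contexts); relevance is a sparse dot product.
q_lex = np.maximum(rng.normal(size=vocab), 0.0)
d_lex = np.maximum(rng.normal(size=vocab), 0.0)
lexicon_score = float(q_lex @ d_lex)

# Uni-retrieval (sketched): fuse both views of the same dual-representing model.
uni_score = dense_score + lexicon_score
```

Because the lexicon weights are clamped to be non-negative, the lexicon score is always ≥ 0, while the dense score may be negative; the fused score lets the two complementary views correct each other.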

Authors (7)
  1. Tao Shen (87 papers)
  2. Xiubo Geng (36 papers)
  3. Chongyang Tao (61 papers)
  4. Can Xu (98 papers)
  5. Guodong Long (115 papers)
  6. Kai Zhang (542 papers)
  7. Daxin Jiang (138 papers)
Citations (24)