
Improving Document Representations by Generating Pseudo Query Embeddings for Dense Retrieval (2105.03599v2)

Published 8 May 2021 in cs.IR and cs.CL

Abstract: Recently, retrieval models based on dense representations have gradually been applied in the first stage of document retrieval tasks, showing better performance than traditional sparse vector space models. To obtain high efficiency, these models mostly adopt a Bi-encoder structure. However, this simple structure may cause serious information loss during document encoding, since the encoding is query-agnostic. To address this problem, we design a method that mimics queries on each document through an iterative clustering process and represents each document by multiple pseudo queries (i.e., the cluster centroids). To speed up retrieval with an approximate nearest neighbor search library, we also optimize the matching function with a two-step score calculation procedure. Experimental results on several popular ranking and QA datasets show that our model achieves state-of-the-art results.
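
The abstract's core idea, clustering a document's token embeddings so the centroids act as pseudo query embeddings and then scoring in two steps so an ANN index can be used for the cheap first step, can be sketched as below. This is a minimal illustration assuming NumPy and scikit-learn K-means; the function names, the number of pseudo queries, and the softmax rescoring details are assumptions for exposition, not the authors' reference implementation.

```python
# Sketch of pseudo-query-embedding construction and two-step scoring.
# Assumed names: pseudo_query_embeddings, two_step_score, num_pseudo_queries.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_query_embeddings(token_embeddings: np.ndarray,
                            num_pseudo_queries: int = 4) -> np.ndarray:
    """Cluster a document's token embeddings; centroids serve as pseudo query embeddings."""
    k = min(num_pseudo_queries, len(token_embeddings))
    km = KMeans(n_clusters=k, n_init=10).fit(token_embeddings)
    return km.cluster_centers_  # shape: (k, dim)

def two_step_score(query_emb: np.ndarray, centroids: np.ndarray) -> tuple[float, float]:
    """Step 1: max inner product over centroids (ANN-friendly).
    Step 2: attention-weighted aggregation of centroids, rescored against the query."""
    sims = centroids @ query_emb            # inner product with each pseudo query embedding
    first_step = float(sims.max())          # what an ANN index over all centroids would return
    attn = np.exp(sims - sims.max())
    attn /= attn.sum()                      # softmax attention over this document's centroids
    aggregated = attn @ centroids           # query-conditioned document representation
    second_step = float(aggregated @ query_emb)
    return first_step, second_step
```

In this sketch, the first-step score is what an approximate nearest neighbor library can compute over the pooled centroids of all documents, while the second-step score rescores only the retrieved candidates.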

Authors (6)
  1. Hongyin Tang (9 papers)
  2. Xingwu Sun (32 papers)
  3. Beihong Jin (15 papers)
  4. Jingang Wang (71 papers)
  5. Fuzheng Zhang (60 papers)
  6. Wei Wu (482 papers)
Citations (33)
