What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary (2212.10380v2)

Published 20 Dec 2022 in cs.CL and cs.IR

Abstract: Dual encoders are now the dominant architecture for dense retrieval. Yet, we have little understanding of how they represent text, and why this leads to good performance. In this work, we shed light on this question via distributions over the vocabulary. We propose to interpret the vector representations produced by dual encoders by projecting them into the model's vocabulary space. We show that the resulting projections contain rich semantic information, and draw a connection between them and sparse retrieval. We find that this view can offer an explanation for some of the failure cases of dense retrievers. For example, we observe that the inability of models to handle tail entities is correlated with a tendency of the token distributions to forget some of the tokens of those entities. We leverage this insight and propose a simple way to enrich query and passage representations with lexical information at inference time, and show that this significantly improves performance compared to the original model in zero-shot settings, and specifically on the BEIR benchmark.
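
The abstract's central idea, reading a dense query or passage vector as a distribution over the vocabulary by passing it through a masked-language-model head, can be illustrated with a minimal sketch. The checkpoints below (a sentence-transformers MS MARCO retriever and vanilla `bert-base-uncased` as the source of the MLM head) are illustrative assumptions, not the exact models or projection used by the authors.

```python
# Sketch: project a dense retriever's representation into vocabulary space.
# Assumptions: the dual encoder is BERT-based, and we reuse vanilla BERT's
# pretrained MLM head as the vocabulary projection. Both choices are
# illustrative; the paper's exact setup may differ.

import torch
from transformers import AutoTokenizer, AutoModel, BertForMaskedLM

encoder_name = "sentence-transformers/msmarco-bert-base-dot-v5"  # assumed retriever
tokenizer = AutoTokenizer.from_pretrained(encoder_name)
encoder = AutoModel.from_pretrained(encoder_name)

# MLM head taken from vanilla BERT; maps hidden states -> vocabulary logits.
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
vocab_head = mlm.cls

text = "Who wrote the novel Dune?"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    cls_vec = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] representation
    logits = vocab_head(cls_vec)                          # shape: (1, vocab_size)
    probs = torch.softmax(logits, dim=-1)

# Inspect which vocabulary tokens the dense representation "talks about".
top = torch.topk(probs[0], k=10)
print([tokenizer.convert_ids_to_tokens(int(i)) for i in top.indices])
```

Inspecting the top tokens of such a distribution is one way to see the effect described in the abstract: when tokens of a tail entity receive little mass, the retriever is more likely to fail on queries about that entity.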

Authors (6)
  1. Ori Ram (14 papers)
  2. Liat Bezalel (4 papers)
  3. Adi Zicher (1 paper)
  4. Yonatan Belinkov (111 papers)
  5. Jonathan Berant (107 papers)
  6. Amir Globerson (87 papers)
Citations (31)