You can't pick your neighbors, or can you? When and how to rely on retrieval in the $k$NN-LM (2210.15859v1)

Published 28 Oct 2022 in cs.CL and cs.LG

Abstract: Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs. One such approach, the $k$NN-LM, interpolates any existing LM's predictions with the output of a $k$-nearest neighbors model and requires no additional training. In this paper, we explore the importance of lexical and semantic matching in the context of items retrieved by the $k$NN-LM. We find two trends: (1) the presence of large overlapping $n$-grams between the datastore and evaluation set is an important factor in strong performance, even when the datastore is derived from the training data; and (2) the $k$NN-LM is most beneficial when retrieved items have high semantic similarity with the query. Based on our analysis, we define a new formulation of the $k$NN-LM that uses retrieval quality to assign the interpolation coefficient. We empirically measure the effectiveness of our approach on two English language modeling datasets, Wikitext-103 and PG-19. Our re-formulation of the $k$NN-LM is beneficial in both cases, and leads to nearly 4% improvement in perplexity on the Wikitext-103 test set.

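To make the setup concrete, the standard $k$NN-LM mixes the base LM's next-token distribution with a distribution built from retrieved neighbors using a fixed coefficient $\lambda$; the abstract's reformulation instead ties the coefficient to retrieval quality. The sketch below illustrates that idea only. The function name, the use of the mean neighbor similarity as a quality proxy, and the linear mapping from quality to the coefficient are hypothetical illustrations, not the paper's exact formulation.

```python
import numpy as np

def adaptive_knn_lm_interpolate(p_lm, p_knn, neighbor_sims, base_lambda=0.25):
    """Interpolate base-LM and kNN next-token distributions (illustrative sketch).

    p_lm, p_knn   : arrays of next-token probabilities over the vocabulary.
    neighbor_sims : similarity scores of the k retrieved neighbors to the query,
                    assumed here to lie in [0, 1].
    base_lambda   : weight used when retrieval quality is lowest.

    A fixed-lambda kNN-LM would simply return
        base_lambda * p_knn + (1 - base_lambda) * p_lm.
    Here the coefficient grows with a (hypothetical) retrieval-quality proxy,
    so higher semantic similarity shifts more weight toward the kNN distribution.
    """
    quality = float(np.mean(neighbor_sims))              # proxy for retrieval quality
    lam = base_lambda + (1.0 - base_lambda) * quality    # assumed linear mapping
    lam = float(np.clip(lam, 0.0, 1.0))
    return lam * p_knn + (1.0 - lam) * p_lm
```
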
Authors (6)
  1. Andrew Drozdov (13 papers)
  2. Shufan Wang (17 papers)
  3. Razieh Rahimi (8 papers)
  4. Andrew McCallum (132 papers)
  5. Hamed Zamani (88 papers)
  6. Mohit Iyyer (87 papers)
Citations (15)