
On Retrieval Augmentation and the Limitations of Language Model Training (2311.09615v2)

Published 16 Nov 2023 in cs.CL

Abstract: Augmenting a language model (LM) with $k$-nearest neighbors ($k$NN) retrieval on its training data alone can decrease its perplexity, though the underlying reasons for this remain elusive. In this work, we rule out one previously posited possibility -- the "softmax bottleneck." We then create a new dataset to evaluate LM generalization ability in the setting where training data contains additional information that is not causally relevant. This task is challenging even for GPT-3.5 Turbo. We show that, for both GPT-2 and Mistral 7B, $k$NN retrieval augmentation consistently improves performance in this setting. Finally, to make $k$NN retrieval more accessible, we propose using a multi-layer perceptron model that maps datastore keys to values as a drop-in replacement for traditional retrieval. This reduces storage costs by over 25x.
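The retrieval augmentation the abstract describes follows the general $k$NN-LM recipe: a datastore maps LM hidden states (keys) for training contexts to the next tokens observed after them (values), and the LM's distribution is interpolated with a distance-weighted distribution over retrieved neighbors. The sketch below is a minimal NumPy illustration of that interpolation with random placeholder data; the key/value construction, distance metric, and interpolation weight are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_dim, n_entries = 50, 16, 1000

# Hypothetical datastore: keys are LM hidden states for training contexts,
# values are the next-token ids that followed those contexts.
datastore_keys = rng.normal(size=(n_entries, hidden_dim))
datastore_values = rng.integers(0, vocab_size, size=n_entries)

def knn_probs(query, k=8, temperature=1.0):
    """Distance-weighted next-token distribution from the k nearest keys."""
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()
    probs = np.zeros(vocab_size)
    # Accumulate weight onto each retrieved token (handles repeated values).
    np.add.at(probs, datastore_values[nearest], weights)
    return probs

def interpolate(p_lm, p_knn, lam=0.25):
    """Final distribution: lam * p_kNN + (1 - lam) * p_LM."""
    return lam * p_knn + (1 - lam) * p_lm

query = rng.normal(size=hidden_dim)          # stand-in for an LM hidden state
p_lm = np.full(vocab_size, 1.0 / vocab_size) # placeholder LM distribution
p = interpolate(p_lm, knn_probs(query))
```

The MLP variant the abstract proposes would replace the explicit key lookup with a learned function from keys to value distributions, so the datastore itself need not be stored, which is where the reported >25x storage saving comes from.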

Authors (6)
  1. Ting-Rui Chiang (16 papers)
  2. Xinyan Velocity Yu (10 papers)
  3. Joshua Robinson (35 papers)
  4. Ollie Liu (14 papers)
  5. Isabelle Lee (6 papers)
  6. Dani Yogatama (49 papers)