On Retrieval Augmentation and the Limitations of Language Model Training (2311.09615v2)
Abstract: Augmenting a language model (LM) with $k$-nearest neighbors ($k$NN) retrieval over its own training data can decrease its perplexity, though the underlying reasons for this remain elusive. In this work, we rule out one previously posited possibility -- the "softmax bottleneck." We then create a new dataset to evaluate LM generalization ability in the setting where training data contains additional information that is not causally relevant. This task is challenging even for GPT-3.5 Turbo. We show that, for both GPT-2 and Mistral 7B, $k$NN retrieval augmentation consistently improves performance in this setting. Finally, to make $k$NN retrieval more accessible, we propose using a multi-layer perceptron model that maps datastore keys to values as a drop-in replacement for traditional retrieval. This reduces storage costs by over 25x.
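To make the retrieval-augmentation setup concrete, here is a minimal sketch of the standard $k$NN-LM interpolation the abstract builds on: a datastore maps context embeddings (keys) to observed next-token ids (values), a $k$NN distribution is formed by a softmax over negative distances to the retrieved neighbors, and the result is interpolated with the base LM's distribution. All function names, the interpolation weight `lam`, and the distance temperature `temp` are illustrative, not the paper's exact configuration.

```python
import numpy as np

def knn_lm_probs(query, keys, values, p_lm, vocab_size, k=4, temp=1.0, lam=0.25):
    """Interpolate an LM's next-token distribution with a kNN distribution
    built from a datastore of (key, value) pairs, where keys are context
    embeddings and values are the next-token ids seen in training."""
    # L2 distance from the query embedding to every datastore key
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]            # indices of the k nearest keys
    # softmax over negative distances of the retrieved neighbors
    weights = np.exp(-dists[nn] / temp)
    weights /= weights.sum()
    # scatter neighbor weights onto their stored next-token values
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], weights)
    # final distribution: mix the retrieval distribution with the base LM
    return lam * p_knn + (1.0 - lam) * p_lm
```

The paper's proposed MLP variant would replace the explicit `keys`/`values` lookup above with a learned network approximating the key-to-value map, so the datastore itself need not be stored.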
- Ting-Rui Chiang
- Xinyan Velocity Yu
- Joshua Robinson
- Ollie Liu
- Isabelle Lee
- Dani Yogatama