On the Generalization Ability of Retrieval-Enhanced Transformers (2302.12128v1)

Published 23 Feb 2023 in cs.CL

Abstract: Recent work on the Retrieval-Enhanced Transformer (RETRO) model has shown that off-loading memory from trainable weights to a retrieval database can significantly improve language modelling and match the performance of non-retrieval models that are an order of magnitude larger in size. It has been suggested that at least some of this performance gain is due to non-trivial generalization based on both model weights and retrieval. In this paper, we try to better understand the relative contributions of these two components. We find that the performance gains from retrieval largely originate from overlapping tokens between the database and the test data, suggesting less non-trivial generalization than previously assumed. More generally, our results point to the challenges of evaluating the generalization of retrieval-augmented language models such as RETRO, as even limited token overlap may significantly decrease test-time loss. We release our code and model at https://github.com/TobiasNorlund/retro
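
The abstract's central measurement is the degree of token overlap between retrieved database chunks and the test data. The following is a minimal sketch of one way such overlap could be quantified; the function names, the n-gram window of 8, and the neighbour representation are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: estimate how much of a test chunk's content already
# appears in its retrieved neighbours, i.e. the kind of token overlap the
# paper links to RETRO's reductions in test-time loss.

from typing import List, Sequence, Set, Tuple


def _ngrams(tokens: Sequence[int], n: int) -> Set[Tuple[int, ...]]:
    """All contiguous n-grams of token ids in a chunk."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def token_overlap(test_tokens: Sequence[int],
                  neighbour_chunks: List[Sequence[int]],
                  n: int = 8) -> float:
    """Fraction of the test chunk's n-grams that occur in any retrieved neighbour."""
    test_ngrams = _ngrams(test_tokens, n)
    if not test_ngrams:
        return 0.0
    neighbour_ngrams: Set[Tuple[int, ...]] = set()
    for chunk in neighbour_chunks:
        neighbour_ngrams |= _ngrams(chunk, n)
    return len(test_ngrams & neighbour_ngrams) / len(test_ngrams)


# Example: an identical neighbour gives overlap 1.0, an unrelated one 0.0.
test = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(token_overlap(test, [test]))                              # 1.0
print(token_overlap(test, [[20, 21, 22, 23, 24, 25, 26, 27]]))  # 0.0
```

A statistic of this form can be computed per test chunk and correlated with the per-chunk loss reduction from retrieval, which is the style of analysis the abstract describes.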

Authors (4)
  1. Tobias Norlund (6 papers)
  2. Ehsan Doostmohammadi (11 papers)
  3. Richard Johansson (18 papers)
  4. Marco Kuhlmann (13 papers)
Citations (5)