Improving language models by retrieving from trillions of tokens (2112.04426v3)

Published 8 Dec 2021 in cs.CL and cs.LG

Abstract: We enhance auto-regressive LLMs by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. With a $2$ trillion token database, our Retrieval-Enhanced Transformer (RETRO) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25$\times$ fewer parameters. After fine-tuning, RETRO performance translates to downstream knowledge-intensive tasks such as question answering. RETRO combines a frozen Bert retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training. We typically train RETRO from scratch, yet can also rapidly RETROfit pre-trained transformers with retrieval and still achieve good performance. Our work opens up new avenues for improving LLMs through explicit memory at unprecedented scale.

An Overview of RETRO: Enhancing LLMs with Retrieval-Augmented Transformer Blocks

This paper presents an approach to augmenting large-scale LLMs by extending the transformer architecture with retrieval-augmented transformer (RETRO) blocks. It examines RETRO's impact on model performance and parameter efficiency, introduces a methodology for integrating retrieval mechanisms within transformer models, and substantiates its findings with detailed numerical evaluations across multiple datasets.

RETRO Architecture and Methodology

The RETRO model diverges from conventional transformer-based models by incorporating retrieval mechanisms to access relevant document chunks during the forward pass. Three components drive this process:

  1. Frozen kNN Retriever: A frozen, pre-trained BERT model embeds text chunks, and the nearest-neighbour document chunks for the input are retrieved from the database by kNN search; the retriever is not fine-tuned during training of the LLM.
  2. Chunked Cross-Attention (CCA): This mechanism lets the model attend to the encoded retrieved neighbours chunk by chunk, so it can harness context from the retrieved information efficiently (a minimal sketch follows this list).
  3. RETRO Blocks: Integrated within the transformer layers, these blocks interleave self-attention over the input with chunked cross-attention over the retrieved contexts, followed by feed-forward networks. This design lets the model scale with the size of the retrieval database.
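
To make the chunked cross-attention idea concrete, here is a minimal sketch in Python (PyTorch). The class name, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' implementation; in particular, the paper offsets each chunk's attention to the neighbours retrieved for the preceding chunk to preserve autoregressive ordering, which this sketch omits for brevity.

```python
# Minimal sketch of RETRO-style chunked cross-attention, assuming PyTorch.
# Names and shapes are illustrative, not the paper's exact implementation.
import torch
import torch.nn as nn

class ChunkedCrossAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, chunk_len: int):
        super().__init__()
        self.chunk_len = chunk_len
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor, neighbours: torch.Tensor) -> torch.Tensor:
        # x:          (batch, seq_len, d_model), decoder hidden states
        # neighbours: (batch, n_chunks, n_neighbours * retrieved_len, d_model),
        #             encoder outputs for the chunks retrieved per input chunk
        b, seq_len, d = x.shape
        assert seq_len % self.chunk_len == 0, "sequence must be chunk-aligned"
        n_chunks = seq_len // self.chunk_len

        # Each input chunk attends only to its own retrieved neighbours, so
        # cost grows with the number of chunks, not the full sequence length.
        queries = x.reshape(b * n_chunks, self.chunk_len, d)
        keys = neighbours.reshape(b * n_chunks, -1, d)
        out, _ = self.attn(queries, keys, keys)
        return x + out.reshape(b, seq_len, d)  # residual connection

# Toy usage: 2 chunks of 64 tokens, 2 neighbours of 128 tokens per chunk.
cca = ChunkedCrossAttention(d_model=512, n_heads=8, chunk_len=64)
hidden = torch.randn(1, 128, 512)
retrieved = torch.randn(1, 2, 2 * 128, 512)
out = cca(hidden, retrieved)  # (1, 128, 512)
```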

Empirical Performance Analysis

The evaluation of RETRO spans multiple datasets, including Wikipedia, OpenWebText, and more domain-specific datasets like arXiv and PubMed abstracts.

Key Numerical Results:

  • LAMBADA Accuracy: Consistently high accuracy was observed across different model sizes (172M, 425M, 1.5B, 7.5B parameters), indicating effective context retrieval mechanisms.
  • Perplexity Metrics: Notable improvements in perplexity were reported on corpora such as Wikitext103:
    • 0.70 vs 0.50 (172M RETRO [ON] vs Baseline)
    • 0.65 vs 0.60 (1.5B RETRO [ON] vs Baseline)
  • Bits-Per-Byte (bpb) Reduction: RETRO achieved significant bpb reductions on large evaluation sets, reflecting more efficient compression of the text (see the sketch after this list):
    • On the Wikipedia September 2021 set, bpb ranged from 0.60 to 0.85 depending on model size and retrieval parameters, with larger models faring relatively better.
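
For readers comparing these metrics: bits-per-byte normalises the model's loss by the raw byte count of the evaluation text, making scores comparable across tokenizers, while perplexity is the exponential of the average per-token loss. The following is a small sketch of the conversions, assuming the loss is measured in nats per token; the token and byte counts in the example are hypothetical.

```python
# Sketch of the relation between average loss, perplexity, and bits-per-byte.
import math

def perplexity(nats_per_token: float) -> float:
    # Perplexity is the exponential of the average per-token loss (in nats).
    return math.exp(nats_per_token)

def bits_per_byte(nats_per_token: float, n_tokens: int, n_bytes: int) -> float:
    # Convert total loss from nats to bits, then normalise by byte count;
    # unlike perplexity, this is independent of the tokenizer used.
    return nats_per_token * n_tokens / (math.log(2) * n_bytes)

# Example: 1.1 nats/token over 1,000,000 tokens spanning 4,300,000 bytes.
print(round(perplexity(1.1), 2))                           # 3.0
print(round(bits_per_byte(1.1, 1_000_000, 4_300_000), 2))  # 0.37
```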

Implications and Future Work

Theoretical Implications:

The RETRO model’s architecture demonstrates that retrieval-augmented approaches can mitigate some of the scaling limitations faced by traditional transformers. The chunked cross-attention mechanism adds dynamic context integration that could pave the way for more adaptive LLMs.

Practical Implications:

On a practical level, integrating RETRO blocks could improve real-world applications such as conversational agents, question-answering systems, and text summarization tools. This enhancement is particularly relevant for domains requiring access to large, dynamic knowledge bases.

Future Developments:

Future research could further optimize the retrieval mechanisms, focusing on faster kNN retrieval and refined chunk-selection strategies. Additionally, exploring RETRO's application in multitask learning scenarios and its potential for low-resource languages offers promising directions for the continued evolution of LLMs.

In conclusion, the paper positions RETRO as a formidable enhancement over traditional transformer models: by effectively integrating retrieval mechanisms, it demonstrates substantial improvements in model performance and parameter efficiency. The exploration of retrieval-augmented architectures such as RETRO holds considerable promise for future advances in natural language processing.

Authors (28)
  1. Sebastian Borgeaud (19 papers)
  2. Arthur Mensch (26 papers)
  3. Jordan Hoffmann (14 papers)
  4. Trevor Cai (6 papers)
  5. Eliza Rutherford (7 papers)
  6. Katie Millican (9 papers)
  7. George van den Driessche (7 papers)
  8. Jean-Baptiste Lespiau (17 papers)
  9. Bogdan Damoc (6 papers)
  10. Aidan Clark (13 papers)
  11. Aurelia Guy (8 papers)
  12. Jacob Menick (13 papers)
  13. Roman Ring (7 papers)
  14. Tom Hennigan (8 papers)
  15. Saffron Huang (10 papers)
  16. Loren Maggiore (3 papers)
  17. Chris Jones (35 papers)
  18. Albin Cassirer (10 papers)
  19. Andy Brock (5 papers)
  20. Michela Paganini (27 papers)
Citations (865)