Unbounded cache model for online language modeling with open vocabulary (1711.02604v1)

Published 7 Nov 2017 in cs.LG and cs.CL

Abstract: Recently, continuous cache models were proposed as extensions to recurrent neural network language models, to adapt their predictions to local changes in the data distribution. These models only capture the local context, of up to a few thousand tokens. In this paper, we propose an extension of continuous cache models which can scale to larger contexts. In particular, we use a large-scale non-parametric memory component that stores all the hidden activations seen in the past. We leverage recent advances in approximate nearest neighbor search and quantization algorithms to store millions of representations while searching them efficiently. We conduct extensive experiments showing that our approach significantly improves the perplexity of pre-trained language models on new distributions, and can scale efficiently to much larger contexts than previously proposed local cache models.
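
To make the mechanism concrete, here is a minimal sketch of an unbounded cache in the spirit of the abstract, using FAISS (an approximate nearest neighbor search and quantization library) with an inverted-file product-quantization index. The dimensions, the distance-based scoring kernel, the interpolation weight `lam`, and all variable names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
import faiss  # approximate nearest neighbor search and quantization library

# All sizes below are illustrative assumptions, not the paper's settings.
d = 512              # dimensionality of the RNN hidden states
nlist, m = 1024, 64  # IVF cells and PQ sub-quantizers (d must divide by m)
vocab_size = 10_000

# Inverted-file index with product quantization: compresses millions of
# stored activations while keeping search fast, as the abstract describes.
index = faiss.IndexIVFPQ(faiss.IndexFlatL2(d), d, nlist, m, 8)

# Stand-in data: one hidden state per past time step, paired with the
# word that followed it.
past_states = np.random.randn(100_000, d).astype("float32")
past_words = np.random.randint(0, vocab_size, size=100_000)

index.train(past_states)
index.add(past_states)

def cache_distribution(h_t, k=1024, theta=0.3):
    """Non-parametric cache probability from the k nearest stored states."""
    dists, ids = index.search(h_t.reshape(1, -1).astype("float32"), k)
    valid = ids[0] >= 0                        # FAISS pads missing hits with -1
    scores = np.exp(-theta * dists[0][valid])  # distance kernel (assumed form)
    p = np.zeros(vocab_size)
    np.add.at(p, past_words[ids[0][valid]], scores)
    return p / p.sum()

def predict(p_lm, h_t, lam=0.2):
    """Interpolate the parametric LM distribution with the cache distribution."""
    return (1 - lam) * p_lm + lam * cache_distribution(h_t)
```

At each step the model would also append the current hidden state and the observed next word to the index; because stored vectors are quantized, the memory can hold millions of past activations, which is what lets this cache scale beyond the few-thousand-token window of local cache models.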

Authors (3)
  1. Edouard Grave (56 papers)
  2. Moustapha Cisse (14 papers)
  3. Armand Joulin (81 papers)
Citations (59)