
Adaptive Semiparametric Language Models (2102.02557v1)

Published 4 Feb 2021 in cs.CL

Abstract: We present a language model that combines a large parametric neural network (i.e., a transformer) with a non-parametric episodic memory component in an integrated architecture. Our model uses extended short-term context by caching local hidden states -- similar to Transformer-XL -- and global long-term memory by retrieving a set of nearest neighbor tokens at each timestep. We design a gating function to adaptively combine multiple information sources to make a prediction. This mechanism allows the model to use either local context, short-term memory, or long-term memory (or any combination of them) on an ad hoc basis depending on the context. Experiments on word-based and character-based language modeling datasets demonstrate the efficacy of our proposed method compared to strong baselines.
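The adaptive gating idea from the abstract can be illustrated with a minimal sketch. This is an assumed PyTorch-style implementation, not the paper's code; the class name `AdaptiveGate` and the per-source summary vectors `h_local`, `h_short`, and `h_long` are hypothetical placeholders for the local-context, short-term-cache, and long-term-retrieval representations.

```python
# Minimal sketch (assumptions, not the authors' implementation) of a
# gating function that adaptively combines three information sources:
# local context, cached short-term memory, and retrieved long-term memory.
import torch
import torch.nn as nn


class AdaptiveGate(nn.Module):
    def __init__(self, hidden_dim: int, num_sources: int = 3):
        super().__init__()
        # Predict one gate per source, conditioned on the current
        # local hidden state.
        self.gate_proj = nn.Linear(hidden_dim, num_sources)

    def forward(self, h_local, h_short, h_long):
        # h_*: (batch, hidden_dim) summaries of each information source.
        gates = torch.softmax(self.gate_proj(h_local), dim=-1)   # (batch, 3)
        sources = torch.stack([h_local, h_short, h_long], dim=1)  # (batch, 3, d)
        # Gate-weighted sum; the model can lean on any mixture of sources.
        return (gates.unsqueeze(-1) * sources).sum(dim=1)        # (batch, d)


# Usage: combine the three context vectors before the output projection.
gate = AdaptiveGate(hidden_dim=512)
h = gate(torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 512))
```

Because the gates are computed per timestep from the current hidden state, the mixture can shift on an ad hoc basis, which is the behavior the abstract describes.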

Authors (3)
  1. Dani Yogatama (49 papers)
  2. Cyprien de Masson d'Autume (14 papers)
  3. Lingpeng Kong (134 papers)
Citations (95)