Characterizing Verbatim Short-Term Memory in Neural Language Models (2210.13569v2)

Published 24 Oct 2022 in cs.CL

Abstract: When a language model (LM) is trained to predict natural language sequences, its prediction at each moment depends on a representation of prior context. What kind of information about the prior context can LMs retrieve? We tested whether LMs could retrieve the exact words that occurred previously in a text. In our paradigm, LMs (transformers and an LSTM) processed English text in which a list of nouns occurred twice. We operationalized retrieval as the reduction in surprisal from the first to the second list. We found that the transformers retrieved both the identity and ordering of nouns from the first list. Further, the transformers' retrieval was markedly enhanced when they were trained on a larger corpus and with greater model depth. Lastly, their ability to index prior tokens was dependent on learned attention patterns. In contrast, the LSTM exhibited less precise retrieval, which was limited to list-initial tokens and to short intervening texts. The LSTM's retrieval was not sensitive to the order of nouns, and it improved when the list was semantically coherent. We conclude that the transformers implemented something akin to a working memory system that could flexibly retrieve individual token representations across arbitrary delays; conversely, the LSTM maintained a coarser and more rapidly-decaying semantic gist of prior tokens, weighted toward the earliest items.

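The retrieval measure described in the abstract, the reduction in per-token surprisal from the first to the second occurrence of a noun list, can be computed directly from any autoregressive language model. The sketch below is illustrative only: it assumes GPT-2 via the Hugging Face transformers library as a stand-in for the paper's models, and the example text and noun list are made up rather than taken from the paper's stimuli.

```python
# Minimal sketch of a surprisal-reduction measure for a repeated noun list.
# Assumptions: GPT-2 stands in for the paper's transformer models; the text
# below is an illustrative example, not the paper's actual stimuli.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability assigned to each token given its preceding context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(targets.size(0)), targets]
    surprisal_bits = nll / torch.log(torch.tensor(2.0))
    tokens = tokenizer.convert_ids_to_tokens(targets.tolist())
    return list(zip(tokens, surprisal_bits.tolist()))

# A noun list that occurs twice, separated by intervening text (illustrative).
text = ("The list was: apple, river, candle. "
        "After a short pause, the speaker repeated it. "
        "The list was: apple, river, candle.")

for tok, s in token_surprisals(text):
    print(f"{tok!r}\t{s:.2f} bits")
# Retrieval is then quantified as the drop in surprisal on the second
# occurrence of each noun relative to its first occurrence.
```

Comparing the surprisal of the nouns in the second list against the first, and varying the length of the intervening text, reproduces the kind of contrast the paper draws between transformer and LSTM retrieval.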
Authors (3)
  1. Kristijan Armeni (2 papers)
  2. Christopher Honey (2 papers)
  3. Tal Linzen (73 papers)
Citations (3)
