Memformer: A Memory-Augmented Transformer for Sequence Modeling (2010.06891v2)

Published 14 Oct 2020 in cs.CL

Abstract: Transformers have achieved remarkable success in sequence modeling. However, these models have efficiency issues, as they need to store all historical token-level representations as memory. We present Memformer, an efficient neural network for sequence modeling that utilizes an external dynamic memory to encode and retrieve past information. Our model achieves linear time complexity and constant memory space complexity when processing long sequences. We also propose a new optimization scheme, memory replay back-propagation (MRBP), which promotes long-range back-propagation through time with a significantly reduced memory requirement. Experimental results show that Memformer attains performance comparable to the baselines while using 8.1x less memory space and running 3.2x faster at inference. Analysis of the attention pattern shows that our external memory slots can encode and retain important information across timesteps.
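
The core mechanism the abstract describes is an external set of memory slots that each input segment reads from and writes to, so the carried state stays constant-size regardless of sequence length. Below is a minimal, hypothetical PyTorch sketch of such a read/write cycle; the class, parameter names, and slot-update rule are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of an external-memory read/write cycle (not the paper's code).
import torch
import torch.nn as nn

class MemorySlotAttention(nn.Module):
    """Cross-attend segment tokens to a fixed number of external memory slots."""
    def __init__(self, d_model: int, num_slots: int, num_heads: int = 8):
        super().__init__()
        self.memory_init = nn.Parameter(torch.randn(num_slots, d_model))
        self.read = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.write = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def initial_memory(self, batch_size: int) -> torch.Tensor:
        # Start every sequence from the same learned slot embeddings.
        return self.memory_init.unsqueeze(0).expand(batch_size, -1, -1)

    def forward(self, tokens: torch.Tensor, memory: torch.Tensor):
        # Read: segment tokens attend to the current memory slots (retrieve past info).
        read_out, _ = self.read(tokens, memory, memory)
        tokens = tokens + read_out
        # Write: memory slots attend to the segment tokens (encode new info),
        # keeping the carried state at a fixed number of slots.
        new_memory, _ = self.write(memory, tokens, tokens)
        return tokens, new_memory
```

Processing a long sequence segment by segment with a module like this, carrying `memory` forward between calls, is what yields the linear-time, constant-memory behavior claimed in the abstract; MRBP then replays the stored memory states during the backward pass rather than keeping every intermediate activation, in a spirit similar to gradient checkpointing across timesteps.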

Authors (6)
  1. Qingyang Wu (29 papers)
  2. Zhenzhong Lan (56 papers)
  3. Kun Qian (87 papers)
  4. Jing Gu (29 papers)
  5. Alborz Geramifard (22 papers)
  6. Zhou Yu (206 papers)
Citations (42)