Transformers are Multi-State RNNs (2401.06104v2)

Published 11 Jan 2024 in cs.CL

Abstract: Transformers are considered conceptually different from the previous generation of state-of-the-art NLP models - recurrent neural networks (RNNs). In this work, we demonstrate that decoder-only transformers can in fact be conceptualized as unbounded multi-state RNNs - an RNN variant with unlimited hidden state size. We further show that transformers can be converted into $\textit{bounded}$ multi-state RNNs by fixing the size of their hidden state, effectively compressing their key-value cache. We introduce a novel, training-free compression policy - $\textbf{T}$oken $\textbf{O}$mission $\textbf{V}$ia $\textbf{A}$ttention (TOVA). Our experiments with four long range tasks and several LLMs show that TOVA outperforms several baseline compression policies. Particularly, our results are nearly on par with the full model, using in some cases only $\frac{1}{8}$ of the original cache size, which translates to 4.8X higher throughput. Our results shed light on the connection between transformers and RNNs, and help mitigate one of LLMs' most painful computational bottlenecks - the size of their key-value cache. We publicly release our code at https://github.com/schwartz-lab-NLP/TOVA

Overview of Transformers and RNNs

Transformers have become a staple in NLP, largely due to their ability to handle sequential data efficiently. Their architecture differs significantly from the previously dominant recurrent neural networks (RNNs), which process sequences by maintaining a state that summarizes previous inputs. This paper puts forward an intriguing perspective: decoder-only transformers closely resemble a particular kind of RNN, termed an unbounded multi-state RNN (MSRNN).

New Insights into Transformer Architecture

The paper posits that decoder-only transformers, which generate output auto-regressively, align with the core principle of RNNs by carrying a state from one step to the next: the growing key-value (KV) cache. What sets transformers apart is that they can be seen as MSRNNs with an unlimited number of states, one per processed token. Fixing the hidden state size converts them into bounded MSRNNs, which amounts to compressing the KV cache. This reframing connects transformers to existing cache-compression techniques and opens the door to new, more efficient compression policies.
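
To make the analogy concrete, here is a minimal, illustrative sketch (not the paper's code) of a single attention head during decoding, written so that the KV cache plays the role of an RNN state that grows by one entry per token; all names and shapes are chosen for clarity.

```python
import torch
import torch.nn.functional as F

def msrnn_step(query, new_key, new_value, keys, values):
    """One decoding step of a single attention head, viewed as a
    multi-state RNN update: the "state" is the pair (keys, values),
    and it grows by one entry per generated token (an unbounded MSRNN).

    Illustrative shapes: query/new_key/new_value are (d,),
    keys/values are (t, d) for the t tokens seen so far.
    """
    # Append the current token's key/value to the state.
    keys = torch.cat([keys, new_key.unsqueeze(0)], dim=0)        # (t+1, d)
    values = torch.cat([values, new_value.unsqueeze(0)], dim=0)  # (t+1, d)

    # Standard causal attention over the (now larger) state.
    scores = keys @ query / keys.shape[-1] ** 0.5                # (t+1,)
    attn = F.softmax(scores, dim=-1)
    output = attn @ values                                       # (d,)
    return output, (keys, values)  # output plus the updated RNN state
```

Nothing here bounds the state: the cached tensors grow with every step, which is why the paper calls this an unbounded MSRNN, and why fixing their size (i.e., compressing the cache) yields a bounded one.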

Introducing TOVA for Transformer Compression

One such policy, Token Omission Via Attention (TOVA), simplifies existing policies by using attention scores alone to decide which tokens to retain in the state: whenever the cache exceeds a fixed budget, the token receiving the lowest attention is dropped, as sketched below. Experiments on four long-range tasks show that TOVA performs nearly on par with transformers using the full (unbounded) cache while keeping only a fraction of it, in some cases just 1/8 of the original size. This establishes TOVA as an efficient, training-free method for converting transformers into bounded MSRNNs, reducing computational costs with minimal impact on performance.
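
The following is a hedged sketch of the eviction step based on the description above; it assumes a per-layer cache of shape (heads, tokens, dim) and averages the current token's attention over heads, following the paper's description but using illustrative names rather than the released implementation.

```python
import torch

def tova_evict(keys, values, attn_weights, cache_limit):
    """Token Omission Via Attention (TOVA), sketched from the paper's
    description: keep the cache at a fixed size by dropping the token
    that receives the lowest attention from the current query.

    keys, values: (heads, t, d) cached states for one layer
    attn_weights: (heads, t) attention of the current token over the cache
    cache_limit:  maximum number of tokens to keep
    """
    t = keys.shape[1]
    if t <= cache_limit:
        return keys, values

    # Average attention over heads and find the weakest token.
    mean_attn = attn_weights.mean(dim=0)   # (t,)
    evict_idx = int(mean_attn.argmin())

    # Drop that token's key/value from every head.
    keep = torch.ones(t, dtype=torch.bool)
    keep[evict_idx] = False
    return keys[:, keep], values[:, keep]
```

In the full method this eviction runs at every decoding step and in every layer. Unlike recency-based policies, nothing forces the oldest tokens to be kept, yet the paper observes that early tokens, especially the very first one, tend to survive because they continue to receive attention.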

Practical Implications and Benefits

The findings have considerable practical implications. With TOVA, KV-cache memory consumption during inference was reduced by up to 88%, which can significantly increase batch sizes and improve hardware utilization (the paper reports up to 4.8X higher throughput). While transformers were traditionally seen as distinct from RNNs, this work bridges the two, revealing that in practice transformer decoder LLMs often function as bounded MSRNNs. With this understanding, developers and researchers can optimize transformer models, making them more accessible and efficient.
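
As a rough back-of-envelope illustration (my own, using LLaMA-2-7B-like dimensions rather than figures from the paper), keeping 1/8 of the cache corresponds to roughly the quoted ~88% reduction:

```python
def kv_cache_bytes(layers, heads, head_dim, seq_len, batch, bytes_per_elem=2):
    """Rough KV-cache size: two tensors (K and V) per layer, per head,
    per cached token, stored in fp16 (2 bytes). Illustrative only."""
    return 2 * layers * heads * head_dim * seq_len * batch * bytes_per_elem

# LLaMA-2-7B-like dimensions: 32 layers, 32 heads, head_dim 128.
full = kv_cache_bytes(32, 32, 128, seq_len=4096, batch=1)
tova = kv_cache_bytes(32, 32, 128, seq_len=4096 // 8, batch=1)
print(f"full cache: {full / 2**30:.2f} GiB, 1/8 cache: {tova / 2**30:.2f} GiB")
print(f"reduction: {1 - tova / full:.1%}")  # ~87.5%, matching the ~88% figure
```

Because cache size scales linearly with sequence length and batch size, the freed memory can be spent directly on larger batches, which is where the throughput gains come from.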

The paper concludes by emphasizing that although transformers are conceptualized as having unbounded multi-state capacity, in practice they often behave like RNNs with bounded capacity, paving the way for further optimization and for analysis of how these models process and retain information across long sequences.

Authors (5)
  1. Matanel Oren (3 papers)
  2. Michael Hassid (12 papers)
  3. Yossi Adi (96 papers)
  4. Roy Schwartz (74 papers)
  5. Nir Yarden (1 paper)
Citations (20)

HackerNews

  1. Transformers Are Multi-State RNNs (41 points, 9 comments)