Overview of Transformers and RNNs
Transformers have become a staple in NLP, largely due to their ability to handle sequential data efficiently. Their architecture differs significantly from the previously dominant Recurrent Neural Networks (RNNs), which process a sequence by maintaining a state that summarizes previous inputs. However, a new paper puts forward an intriguing perspective: decoder-only transformers can be viewed as a particular kind of RNN, termed an infinite multi-state RNN (MSRNN).
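To make the analogy concrete, here is a toy sketch (my own illustration, not code from the paper): an RNN carries a fixed-size state between steps, whereas a decoder-only transformer carries its key/value cache, which grows with every token it processes.

```python
import numpy as np

def rnn_step(state, x, W_h, W_x):
    # Classic RNN: the carried state has a fixed size, no matter how many
    # tokens have been processed so far.
    return np.tanh(state @ W_h + x @ W_x)

def transformer_decoder_step(kv_cache, k_t, v_t):
    # Decoder-only transformer: the carried "state" is the cache of
    # key/value pairs, and it grows by one entry per generated token.
    kv_cache.append((k_t, v_t))
    return kv_cache
```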
New Insights into Transformer Architecture
The paper posits that decoder-only transformers, which generate output auto-regressively, align with the core principle of RNNs: they carry a state from one step to the next. What sets them apart is that this state, the cache of key/value pairs, grows with every generated token, so transformers correspond to MSRNNs with an unbounded number of states. Capping the number of tokens kept in that cache converts a pretrained transformer into a finite MSRNN. This reframing recasts established cache compression techniques already present in the field as such conversions, and opens the door for new, more efficient compression policies.
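As a minimal sketch of what such a conversion looks like (my own simplification; the function name and signature are illustrative, not from the paper), even a simple sliding-window policy keeps the cache, and hence the number of states, bounded:

```python
def compress_cache_window(kv_cache, budget):
    """Illustrative conversion policy: once the cache exceeds a fixed budget,
    drop the oldest entries so the number of retained states stays constant,
    i.e. the model behaves as a finite MSRNN."""
    if len(kv_cache) > budget:
        return kv_cache[-budget:]
    return kv_cache
```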
Introducing TOVA for Transformer Compression
One such policy, called Token Omission Via Attention (TOVA), simplifies prior approaches by relying only on attention scores to decide which tokens to keep in the state. The research shows TOVA's effectiveness across several long-range tasks, where it performs comparably to a transformer with the full (infinite) cache while using only a fraction of the original cache memory. This establishes TOVA as an efficient and potent method for converting transformers into finite MSRNNs, potentially reducing computational costs with minimal impact on performance.
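The core idea can be sketched roughly as follows (a simplified illustration under my own naming, not the authors' code): at each decoding step, if the cache exceeds its budget, the cached token that receives the lowest attention score from the current query is dropped.

```python
import numpy as np

def tova_evict(keys, values, attn_weights, budget):
    """Rough sketch of the TOVA idea (argument names are illustrative):
    keys, values -- cached entries, shape (num_cached, d)
    attn_weights -- attention of the current query over the cache,
                    shape (num_cached,), e.g. averaged across heads
    budget       -- maximum number of tokens to keep in the state
    """
    if len(keys) <= budget:
        return keys, values
    drop = int(np.argmin(attn_weights))          # least-attended cached token
    keep = np.delete(np.arange(len(keys)), drop)  # indices of tokens to retain
    return keys[keep], values[keep]
```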
Practical Implications and Benefits
The findings of this paper have considerable practical implications. TOVA reduces memory consumption during LLM inference by up to 88%, which could significantly increase batch sizes and improve hardware utilization. While transformers were traditionally seen as distinct from RNNs, this paper bridges the two, revealing that in practice, transformer decoder LLMs often function as finite MSRNNs. With this new understanding, developers and researchers in AI could optimize transformer models, making them more accessible and efficient.
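For a back-of-envelope sense of scale (the model dimensions below are hypothetical, chosen only for illustration), KV-cache memory grows linearly with the number of cached tokens, so keeping roughly one eighth of them corresponds to the reported ~88% saving:

```python
def kv_cache_bytes(num_tokens, num_layers=32, num_heads=32, head_dim=128,
                   bytes_per_elem=2):
    # 2x for keys and values; 2 bytes per element assumes fp16 storage.
    return 2 * num_tokens * num_layers * num_heads * head_dim * bytes_per_elem

full = kv_cache_bytes(4096)             # hypothetical 4096-token context
compressed = kv_cache_bytes(4096 // 8)  # keep ~1/8 of the tokens
print(f"saving: {1 - compressed / full:.0%}")  # ~88%
```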
The paper concludes by emphasizing that while transformers are conceptualized as having infinite multi-state capacity, in practice they often behave like RNNs with limited capacity, paving the way for further optimization and analysis of how these models process and retain information across long sequences.