Language Modeling using LMUs: 10x Better Data Efficiency or Improved Scaling Compared to Transformers (2110.02402v1)

Published 5 Oct 2021 in cs.LG and cs.CL

Abstract: Recent studies have demonstrated that the performance of transformers on the task of language modeling obeys a power-law relationship with model size over six orders of magnitude. While transformers exhibit impressive scaling, their performance hinges on processing large amounts of data, and their computational and memory requirements grow quadratically with sequence length. Motivated by these considerations, we construct a Legendre Memory Unit based model that introduces a general prior for sequence processing and exhibits an $O(n)$ and $O(n \ln n)$ (or better) dependency for memory and computation respectively. Over three orders of magnitude, we show that our new architecture attains the same accuracy as transformers with 10x fewer tokens. We also show that for the same amount of training our model improves the loss over transformers about as much as transformers improve over LSTMs. Additionally, we demonstrate that adding global self-attention complements our architecture and the augmented model improves performance even further.
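
The "general prior for sequence processing" refers to the Legendre Memory Unit's fixed linear memory: a state-space system whose matrices are derived from Legendre polynomials. The sketch below is a minimal NumPy illustration of that memory recurrence, assuming the standard LMU state-space construction and a zero-order-hold discretization; the `order` and `theta` values are illustrative, and the paper's full layer (learned projections, nonlinearities, optional global self-attention) is not shown.

```python
# Minimal sketch of the LMU's linear memory recurrence (illustrative only).
import numpy as np
from scipy.linalg import expm

def lmu_matrices(order, theta):
    """Continuous-time LMU matrices A, B for a sliding window of length theta."""
    q = np.arange(order, dtype=np.float64)
    r = (2 * q + 1)[:, None] / theta
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r
    B = ((-1.0) ** q)[:, None] * r
    return A, B

def discretize_zoh(A, B, dt=1.0):
    """Zero-order-hold discretization: m_t = Ad @ m_{t-1} + Bd * u_t."""
    Ad = expm(A * dt)
    Bd = np.linalg.solve(A, (Ad - np.eye(A.shape[0])) @ B)
    return Ad, Bd

def lmu_memory(u, order=64, theta=128.0):
    """Run the (nonlinearity-free) memory recurrence over a 1-D input sequence."""
    Ad, Bd = discretize_zoh(*lmu_matrices(order, theta))
    m = np.zeros((order, 1))
    states = []
    for u_t in u:
        m = Ad @ m + Bd * u_t   # purely linear, fixed (non-learned) update
        states.append(m.ravel().copy())
    return np.stack(states)     # shape: (len(u), order)

if __name__ == "__main__":
    u = np.sin(np.linspace(0, 8 * np.pi, 512))
    print(lmu_memory(u).shape)  # (512, 64)
```

Because this update is linear and time-invariant, the entire state trajectory can equivalently be computed as a convolution of the input with the system's impulse response (e.g. via FFT), which is the source of the $O(n \ln n)$ compute and $O(n)$ memory scaling claimed in the abstract.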

Authors (5)
  1. Narsimha Chilkuri (3 papers)
  2. Eric Hunsberger (5 papers)
  3. Aaron Voelker (1 paper)
  4. Gurshaant Malik (3 papers)
  5. Chris Eliasmith (16 papers)
Citations (7)