A Formal Framework for Understanding Length Generalization in Transformers (2410.02140v1)

Published 3 Oct 2024 in cs.LG

Abstract: A major challenge for transformers is generalizing to sequences longer than those observed during training. While previous works have empirically shown that transformers can either succeed or fail at length generalization depending on the task, theoretical understanding of this phenomenon remains limited. In this work, we introduce a rigorous theoretical framework to analyze length generalization in causal transformers with learnable absolute positional encodings. In particular, we characterize those functions that are identifiable in the limit from sufficiently long inputs with absolute positional encodings under an idealized inference scheme using a norm-based regularizer. This enables us to prove the possibility of length generalization for a rich family of problems. We experimentally validate the theory as a predictor of success and failure of length generalization across a range of algorithmic and formal language tasks. Our theory not only explains a broad set of empirical observations but also opens the way to provably predicting length generalization capabilities in transformers.
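
The setup the abstract describes, training a causal transformer with learnable absolute positional encodings on short inputs and then probing it on longer ones, can be illustrated with a minimal sketch. Everything below is a hypothetical illustration rather than the paper's actual code: the model sizes, the toy previous-token task, and the use of weight decay as a loose stand-in for a norm-based penalty are all assumptions made for the example.

```python
# Minimal sketch (not the paper's code): train a small causal transformer with
# learnable absolute positional embeddings on short sequences of a toy task,
# then evaluate on longer sequences to measure length generalization.
import torch
import torch.nn as nn

VOCAB, MAX_LEN, D = 16, 256, 64  # hypothetical sizes

class TinyCausalTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D)
        self.pos = nn.Embedding(MAX_LEN, D)  # learnable absolute positional encodings
        layer = nn.TransformerEncoderLayer(D, nhead=4, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D, VOCAB)

    def forward(self, x):  # x: (batch, seq)
        T = x.size(1)
        h = self.tok(x) + self.pos(torch.arange(T, device=x.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(x.device)
        h = self.encoder(h, mask=mask)  # causal self-attention
        return self.head(h)

def prev_token_batch(batch, length):
    """Toy algorithmic task: at each position, predict the previous token."""
    x = torch.randint(0, VOCAB, (batch, length))
    y = torch.roll(x, shifts=1, dims=1)  # target = token one step back
    return x, y

model = TinyCausalTransformer()
# weight_decay stands in loosely for a norm-based regularizer (an assumption,
# not the paper's idealized inference scheme)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):  # train only on short sequences (length 32)
    x, y = prev_token_batch(32, length=32)
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, VOCAB), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():  # probe longer, unseen lengths
    for L in (32, 64, 128):
        x, y = prev_token_batch(256, length=L)
        acc = (model(x).argmax(-1) == y).float().mean().item()
        print(f"length {L}: accuracy {acc:.3f}")
```

In this sketch, whether accuracy holds up at lengths 64 and 128 after training only on length 32 is exactly the kind of success-or-failure outcome the paper's framework aims to predict for a given task.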

Authors (8)
  1. Xinting Huang (36 papers)
  2. Andy Yang (5 papers)
  3. Satwik Bhattamishra (13 papers)
  4. Yash Sarrof (3 papers)
  5. Andreas Krebs (24 papers)
  6. Hattie Zhou (10 papers)
  7. Preetum Nakkiran (43 papers)
  8. Michael Hahn (48 papers)