Randomized Positional Encodings Boost Length Generalization of Transformers (2305.16843v1)

Published 26 May 2023 in cs.LG, cs.AI, cs.CL, and stat.ML

Abstract: Transformers have impressive generalization capabilities on tasks with a fixed context length. However, they fail to generalize to sequences of arbitrary length, even for seemingly simple tasks such as duplicating a string. Moreover, simply training on longer sequences is inefficient due to the quadratic computation complexity of the global attention mechanism. In this work, we demonstrate that this failure mode is linked to positional encodings being out-of-distribution for longer sequences (even for relative encodings) and introduce a novel family of positional encodings that can overcome this problem. Concretely, our randomized positional encoding scheme simulates the positions of longer sequences and randomly selects an ordered subset to fit the sequence's length. Our large-scale empirical evaluation of 6000 models across 15 algorithmic reasoning tasks shows that our method allows Transformers to generalize to sequences of unseen length (increasing test accuracy by 12.0% on average).

Authors (8)
  1. Anian Ruoss (20 papers)
  2. Tim Genewein (25 papers)
  3. Jordi Grau-Moya (25 papers)
  4. Róbert Csordás (25 papers)
  5. Mehdi Bennani (3 papers)
  6. Shane Legg (47 papers)
  7. Joel Veness (29 papers)
  8. Grégoire Delétang (12 papers)
Citations (84)