Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation (2311.00684v2)

Published 1 Nov 2023 in cs.CL and cs.LG

Abstract: An ideal length-extrapolatable Transformer language model can handle sequences longer than the training length without any fine-tuning. Such long-context utilization capability relies heavily on a flexible positional embedding design. Upon investigating the flexibility of existing large pre-trained Transformer language models, we find that the T5 family deserves a closer look, as its positional embeddings capture rich and flexible attention patterns. However, T5 suffers from the dispersed attention issue: the longer the input sequence, the flatter the attention distribution. To alleviate the issue, we propose two attention alignment strategies via temperature scaling. Our findings show improvement in the long-context utilization capability of T5 on language modeling, retrieval, multi-document question answering, and code completion tasks without any fine-tuning. This suggests that a flexible positional embedding design and attention alignment can go a long way toward Transformer length extrapolation.
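The dispersed attention issue and its temperature-scaling fix can be illustrated with a minimal sketch: multiplying the attention logits by a temperature greater than one sharpens the softmax distribution, counteracting the flattening that occurs as sequences grow past the training length. The log-length temperature rule below is one common heuristic and an assumption for illustration; the paper's two alignment strategies may compute the temperature differently.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temperature_scaled_attention(q, k, v, train_len):
    """Single-head attention with a length-dependent temperature.

    q, k, v: arrays of shape (seq_len, d).
    train_len: the sequence length seen during training.
    """
    n, d = q.shape
    # Hypothetical temperature schedule: sharpen attention once the
    # input exceeds the training length (tau = 1 at n = train_len).
    tau = max(1.0, np.log(n) / np.log(train_len))
    logits = (q @ k.T) / np.sqrt(d)
    weights = softmax(tau * logits, axis=-1)
    return weights @ v, weights
```

Because tau > 1 for n > train_len, the scaled attention rows have lower entropy than the unscaled ones, i.e. probability mass re-concentrates on the highest-scoring keys rather than dispersing across the longer context.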

Authors (3)
  1. Ta-Chung Chi (19 papers)
  2. Ting-Han Fan (15 papers)
  3. Alexander I. Rudnicky (9 papers)
Citations (4)