DAPE: Data-Adaptive Positional Encoding for Length Extrapolation (2405.14722v6)

Published 23 May 2024 in cs.CL

Abstract: Positional encoding plays a crucial role in transformers, significantly impacting model performance and length generalization. Prior research has introduced absolute positional encoding (APE) and relative positional encoding (RPE) to distinguish token positions in given sequences. However, both APE and RPE remain fixed after model training regardless of the input data, limiting their adaptability and flexibility. Hence, an ideal positional encoding should be data-adaptive, adjusting dynamically with the given attention. In this paper, we propose a Data-Adaptive Positional Encoding (DAPE) method, which adjusts dynamically and semantically based on the input context and learned fixed priors. Experimental validation on real-world datasets (Arxiv, Books3, and CHE) demonstrates that DAPE enhances model performance at both the trained length and extrapolated lengths, with statistically significant improvements. Model visualizations suggest that DAPE retains both local and anti-local information. Finally, a model trained on sequence length 128 achieves better performance at evaluation sequence length 8192 than models with other, static positional encoding methods, revealing the benefit of adaptive positional encoding.
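To make the abstract's core mechanism concrete, the sketch below illustrates one way a data-adaptive positional bias can work: rather than adding a fixed bias to the attention logits, a small per-head MLP combines the logits with a static prior, so the resulting bias depends on the input. This is a minimal illustration of the idea, not the authors' implementation; the MLP width, the ALiBi-style prior, and the names `DAPEAttentionBias` and `alibi_prior` are assumptions made for demonstration.

```python
# Minimal PyTorch sketch of a data-adaptive positional bias in the spirit
# of DAPE. All shapes and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class DAPEAttentionBias(nn.Module):
    def __init__(self, num_heads: int, hidden: int = 32):
        super().__init__()
        # f(A, B): maps (attention logits, static prior bias) to an adaptive
        # bias, applied independently at every (query, key) position pair.
        self.mlp = nn.Sequential(
            nn.Linear(2 * num_heads, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_heads),
        )

    def forward(self, logits: torch.Tensor, static_bias: torch.Tensor) -> torch.Tensor:
        # logits: (batch, heads, q_len, k_len); static_bias: (heads, q_len, k_len)
        b, h, q, k = logits.shape
        prior = static_bias.expand(b, h, q, k)
        x = torch.cat([logits, prior], dim=1)       # (b, 2h, q, k)
        x = x.permute(0, 2, 3, 1)                   # (b, q, k, 2h)
        adaptive = self.mlp(x).permute(0, 3, 1, 2)  # (b, h, q, k)
        return logits + adaptive                    # data-adaptive logits


def alibi_prior(num_heads: int, q_len: int, k_len: int) -> torch.Tensor:
    # ALiBi-style linear distance penalty as the fixed prior (one slope per head).
    slopes = 2.0 ** (-8.0 * torch.arange(1, num_heads + 1) / num_heads)
    dist = torch.arange(k_len)[None, :] - torch.arange(q_len)[:, None]
    return slopes[:, None, None] * -dist.abs().float()


# Usage: adjust logits before the softmax in an attention layer.
dape = DAPEAttentionBias(num_heads=8)
logits = torch.randn(2, 8, 16, 16)
attn = torch.softmax(dape(logits, alibi_prior(8, 16, 16)), dim=-1)
```

Because the bias is a learned function of the logits themselves, the same module can produce different positional preferences for different inputs, which is what allows a model trained at short lengths to remain useful at much longer evaluation lengths.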

Authors (11)
  1. Chuanyang Zheng (21 papers)
  2. Yihang Gao (13 papers)
  3. Han Shi (27 papers)
  4. Minbin Huang (8 papers)
  5. Jingyao Li (18 papers)
  6. Jing Xiong (30 papers)
  7. Xiaozhe Ren (21 papers)
  8. Michael Ng (11 papers)
  9. Xin Jiang (242 papers)
  10. Zhenguo Li (195 papers)
  11. Yu Li (378 papers)
Citations (3)
