
Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation (2504.18857v1)

Published 26 Apr 2025 in cs.CL and cs.AI

Abstract: LLMs often struggle to process and generate coherent context when the number of input tokens exceeds the pre-trained length. Recent advances in long-context extension have significantly expanded the context window of LLMs, but they incur expensive overhead from training large-scale models at longer contexts. In this work, we propose Dimension-Wise Positional Embeddings Manipulation (DPE), a training-free framework that extrapolates the context window of LLMs by operating on RoPE's hidden dimensions individually. Instead of manipulating all dimensions equally, DPE detects the effective length of every dimension and identifies the key dimensions for context extension. We reuse the original position indices and their embeddings from the pre-trained model, and manipulate only the key dimensions' position indices toward their most effective lengths. In this way, DPE adjusts the pre-trained model with minimal modifications while ensuring that each dimension reaches its optimal state for extrapolation. DPE significantly surpasses well-known baselines such as YaRN and Self-Extend. DPE enables Llama3 8B (pre-trained with an 8k context) to support context windows of 128k tokens without continual training, and it integrates seamlessly with Flash Attention 2. Beyond its extrapolation capability, DPE also dramatically improves models' performance within the training length: for example, Llama3.1 70B gains over 18 points on the popular long-context benchmark RULER. Compared with commercial models, Llama 3.1 70B with DPE even outperforms GPT-4-128K.
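To make the core idea concrete, here is a minimal, hedged sketch of dimension-wise position-index manipulation for RoPE. It is an illustration of the general mechanism the abstract describes, not the paper's actual algorithm: the function names, the per-dimension `effective_len` values, and the linear rescaling rule are all assumptions for this example (the paper detects effective lengths empirically).

```python
import numpy as np

def rope_frequencies(head_dim, base=10000.0):
    # Standard RoPE inverse frequencies, one per pair of hidden dimensions.
    return base ** (-np.arange(0, head_dim, 2) / head_dim)

def dimension_wise_positions(seq_len, head_dim, effective_len, base=10000.0):
    """Return a (seq_len, head_dim // 2) matrix of position indices.

    Each frequency pair keeps its original indices unless the sequence
    exceeds that dimension's assumed effective length, in which case its
    indices are linearly rescaled back into [0, effective_len). This mirrors
    the spirit of DPE: manipulate only the dimensions that need it, leave
    the rest untouched.
    """
    inv_freq = rope_frequencies(head_dim, base)
    pos = np.arange(seq_len, dtype=np.float64)
    pos_matrix = np.tile(pos[:, None], (1, inv_freq.shape[0]))
    for d in range(inv_freq.shape[0]):
        if seq_len > effective_len[d]:
            # Illustrative rescaling rule (an assumption, not the paper's):
            # compress this dimension's indices into its effective range.
            pos_matrix[:, d] = pos * (effective_len[d] / seq_len)
    return pos_matrix
```

The per-dimension view is the key design point: low-frequency RoPE dimensions rotate slowly and may tolerate (or require) different treatment at long range than high-frequency ones, so a single global scaling factor, as in uniform interpolation schemes, is unnecessarily blunt.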

Authors (12)
  1. Yi Lu
  2. Wanxu Zhao
  3. Xin Zhou
  4. Chenxin An
  5. Chenglong Wang
  6. Shuo Li
  7. Yuming Yang
  8. Jun Zhao
  9. Tao Ji
  10. Tao Gui
  11. Qi Zhang
  12. Xuanjing Huang