E^2-LLM: Efficient and Extreme Length Extension of Large Language Models (2401.06951v3)

Published 13 Jan 2024 in cs.CL and cs.AI

Abstract: Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources. Existing long-context extension methods usually need additional training procedures to support corresponding long-context windows, where long-context training data (e.g., 32k) is needed and high GPU training costs are incurred. To address these issues, we propose an Efficient and Extreme length extension method for LLMs, called E²-LLM, with only one training procedure and dramatically reduced computation cost, which also removes the need to collect long-context data. Concretely, first, the training data of our E²-LLM only requires a short length (e.g., 4k), which greatly reduces the tuning cost. Second, the training procedure on the short training context window is performed only once, and we can support different evaluation context windows at inference. Third, in E²-LLM, based on RoPE position embeddings, we introduce two different augmentation methods on the scale and position index parameters for different samples in training. This aims to make the model more robust to different relative distances when directly interpolating to an arbitrary context length at inference. Comprehensive experimental results on multiple benchmark datasets demonstrate the effectiveness of our E²-LLM on challenging long-context tasks.

Introduction

LLMs have transformed the nature of tasks that AI systems can perform. However, they typically have a limited context length, which poses a challenge in applications such as document summarization, long conversations, and lengthy reasoning tasks. Most models have a preset limit on the number of tokens they can consider, and increasing this limit traditionally implies a massive computational burden and fine-tuning on extensive long-context datasets. This paper proposes E²-LLM, a method that streamlines length extension of LLMs: it trains on shorter context lengths and supports evaluation on longer inputs without additional fine-tuning or heavy computation.

Methodology

The basis of E²-LLM lies in its two-pronged augmentation strategy, which leverages the Rotary Position Embedding (RoPE) to extend the effective context length with minimal additional training. The first augmentation varies the scale parameter of the position embeddings, effectively changing the spacing of position indices so that the model learns to cope with varied position densities. The second augmentation adds offsets to the position index parameters, making the model more robust to different positional ranges. This is crucial because it teaches the LLM to generalize across different lengths and relative distances, a capability that is exploited at inference time depending on the target context window.
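
As a rough illustration of these two augmentations, the sketch below (in PyTorch) is a minimal, hypothetical rendering rather than the authors' implementation: the scale `g`, offset `t`, and their sampling ranges are placeholder choices used only to show how a per-sample interpolation scale and position offset would feed into RoPE.

```python
import torch

def rope_angles(positions, head_dim, base=10000.0, scale=1.0):
    # Standard RoPE frequencies; dividing positions by `scale` is the usual
    # positional-interpolation trick of compressing indices into the trained range.
    inv_freq = base ** (-torch.arange(0, head_dim, 2).float() / head_dim)
    return torch.outer(positions.float() / scale, inv_freq)

def sample_augmentation(seq_len, max_scale=8, max_offset=4096):
    # Per-training-sample augmentation (hypothetical ranges):
    #  - g varies the interpolation scale, exposing the model to different position densities
    #  - t shifts the position indices, exposing the model to different absolute ranges
    g = float(torch.randint(1, max_scale + 1, (1,)).item())
    t = int(torch.randint(0, max_offset + 1, (1,)).item())
    positions = torch.arange(seq_len) + t
    return positions, g

# One 4k-length training sample: its positions are offset and compressed as if
# drawn from a much longer window, even though only 4k tokens are attended to.
pos, g = sample_augmentation(seq_len=4096)
angles = rope_angles(pos, head_dim=128, scale=g)
cos, sin = angles.cos(), angles.sin()  # applied to queries/keys in the usual RoPE way
```

Because each sample is trained under a different (scale, offset) pair, no single long sequence is ever needed; the variation itself is what prepares the model for longer windows.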

Experimental Findings

E²-LLM was put to the test on several benchmark datasets designed to challenge the model's long-context abilities. It performed effectively across these tasks, often matching or outperforming existing LLMs that had been trained extensively for longer context windows. Notably, E²-LLM achieved these results with significantly lower GPU memory cost: it required only a single training run on short sequences (e.g., 4k tokens), yet handled much longer contexts (e.g., 32k tokens) effectively at inference.
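
To make the 4k-to-32k setting concrete, the snippet below is a hedged sketch of the inference-time side: the interpolation scale is simply chosen so that the 32k evaluation window maps back onto the 4k range used in training. The rule `scale = target / trained` is an assumption consistent with standard positional interpolation, not necessarily the paper's exact prescription.

```python
import torch

# Hypothetical inference-time setup: train at 4k, evaluate at 32k.
trained_len, target_len = 4096, 32768
scale = target_len / trained_len                      # 8x interpolation
head_dim = 128
inv_freq = 10000.0 ** (-torch.arange(0, head_dim, 2).float() / head_dim)
positions = torch.arange(target_len).float() / scale  # 32k indices compressed into the 4k range
angles = torch.outer(positions, inv_freq)             # rotary angles for the full 32k window
cos, sin = angles.cos(), angles.sin()
```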

Implications and Future Work

The ingenuity of E²-LLM opens new doors for efficient utilization of powerful LLMs without the prohibitive costs associated with training them on long contexts. As future work, the authors intend to apply this methodology to even larger models and examine its performance on more varied datasets and tasks. Furthermore, they plan to explore the method's adaptability to other types of positional encodings and LLMs. As the computational landscape becomes more demanding, E²-LLM stands out as a promising approach to pushing the boundaries of what LLMs can do without breaking the computational bank.

Authors

Jiaheng Liu, Zhiqi Bai, Yuanxing Zhang, Chenchen Zhang, Yu Zhang, Ge Zhang, Jiakai Wang, Haoran Que, Yukang Chen, Wenbo Su, Tiezheng Ge, Jie Fu, Wenhu Chen, and Bo Zheng