
LongSkywork: A Training Recipe for Efficiently Extending Context Length in Large Language Models (2406.00605v1)

Published 2 Jun 2024 in cs.CL and cs.AI

Abstract: We introduce LongSkywork, a long-context LLM capable of processing up to 200,000 tokens. We provide a training recipe for efficiently extending the context length of LLMs. We identify that the critical element in enhancing long-context processing capability is to incorporate a long-context SFT stage following the standard SFT stage. A mere 200 iterations can convert the standard SFT model into a long-context model. To reduce the effort in collecting and annotating data for long-context language modeling, we develop two novel methods for creating synthetic data. These methods are applied during the continual pretraining phase as well as the Supervised Fine-Tuning (SFT) phase, greatly enhancing the training efficiency of our long-context LLMs. Our findings suggest that synthetic long-context SFT data can, to some extent, surpass the performance of human-curated data. LongSkywork achieves outstanding performance on a variety of long-context benchmarks. In the Needle test, a benchmark for long-context information retrieval, our models achieved perfect accuracy across multiple context spans. Moreover, in realistic application scenarios, LongSkywork-13B demonstrates performance on par with Claude2.1, the leading long-context model, underscoring the effectiveness of our proposed methods.
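
To make the Needle test mentioned in the abstract concrete, below is a minimal sketch of a needle-in-a-haystack retrieval check. It is not the paper's harness: the `generate(prompt) -> str` interface, the filler/needle strings, the prompt format, and the substring-match scoring are all illustrative assumptions; the only idea taken from the source is sweeping context lengths and needle depths and scoring retrieval accuracy.

```python
# Hypothetical needle-in-a-haystack sketch; `generate` is an assumed
# model-call interface, not an API from the paper or any library.
FILLER = "The sky was clear and the market was quiet that day. "
NEEDLE = "The secret passcode is 7421."
QUESTION = "What is the secret passcode mentioned in the context?"

def build_context(total_words: int, depth: float) -> str:
    """Bury the needle at a relative depth (0.0 = start, 1.0 = end)
    inside roughly `total_words` words of filler text."""
    n_fillers = max(1, total_words // len(FILLER.split()))
    chunks = [FILLER] * n_fillers
    chunks.insert(int(n_fillers * depth), NEEDLE + " ")
    return "".join(chunks)

def needle_accuracy(generate, lengths, depths, answer="7421") -> float:
    """Query the model at each (context length, needle depth) pair and
    score exact containment of the answer in the reply."""
    hits, total = 0, 0
    for n_words in lengths:
        for depth in depths:
            prompt = build_context(n_words, depth) + "\n\n" + QUESTION
            hits += answer in generate(prompt)
            total += 1
    return hits / total
```

A sweep such as `needle_accuracy(my_model.generate, lengths=[1_000, 8_000, 32_000], depths=[0.0, 0.5, 1.0])` probes a small grid of context sizes and needle positions; the perfect-accuracy claim in the abstract corresponds to a score of 1.0 across the full grid at the model's supported context spans.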

Authors (15)
  1. Liang Zhao (353 papers)
  2. Tianwen Wei (20 papers)
  3. Liang Zeng (31 papers)
  4. Cheng Cheng (188 papers)
  5. Liu Yang (195 papers)
  6. Peng Cheng (229 papers)
  7. Lijie Wang (23 papers)
  8. Chenxia Li (12 papers)
  9. Xuejie Wu (3 papers)
  10. Bo Zhu (83 papers)
  11. Yimeng Gan (1 paper)
  12. Rui Hu (96 papers)
  13. Shuicheng Yan (275 papers)
  14. Han Fang (61 papers)
  15. Yahui Zhou (18 papers)
Citations (5)