Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models (2405.17915v1)

Published 28 May 2024 in cs.CL

Abstract: Long-context modeling capabilities are important for LLMs in various applications. However, directly training LLMs with long context windows is insufficient to enhance this capability, since some training samples do not exhibit strong semantic dependencies across long contexts. In this study, we propose a data mining framework, ProLong, that assigns each training sample a long dependency score, which can be used to rank and filter samples that are more advantageous for enhancing long-context modeling abilities in LLM training. Specifically, we first use delta perplexity scores to measure the Dependency Strength between text segments in a given document. We then refine this metric based on the Dependency Distance of these segments to incorporate spatial relationships across long contexts. The final scores are calibrated with a Dependency Specificity metric to prevent trivial dependencies introduced by repetitive patterns. Moreover, a random sampling approach is proposed to optimize the computational efficiency of ProLong. Comprehensive experiments on multiple benchmarks indicate that ProLong effectively identifies documents that carry long dependencies, and LLMs trained on these documents exhibit significantly enhanced long-context modeling capabilities.
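To make the scoring pipeline concrete, here is a minimal sketch of how a long dependency score along these lines could be computed. It is not the authors' released implementation: the weighting forms, the `dependency_strength` and `long_dependency_score` helpers, and in particular the use of zlib compression as a stand-in for the paper's LLM-based delta perplexity are illustrative assumptions so the example runs without a model.

```python
# Illustrative sketch of a ProLong-style long dependency score.
# Assumption: compressed size approximates negative log-likelihood; the paper
# instead uses delta perplexity from an LLM.
import random
import zlib
from itertools import combinations


def _cost(text: str) -> int:
    """Compressed size in bytes: a crude proxy for negative log-likelihood."""
    return len(zlib.compress(text.encode("utf-8")))


def dependency_strength(context: str, target: str) -> float:
    """How much cheaper `target` becomes when `context` is prepended
    (a proxy for the delta perplexity measure described in the abstract)."""
    alone = _cost(target)
    conditioned = _cost(context + target) - _cost(context)
    return max(alone - conditioned, 0.0)


def long_dependency_score(segments: list[str], max_pairs: int = 64, seed: int = 0) -> float:
    """Combine Dependency Strength, Dependency Distance, and Dependency
    Specificity over a random sample of segment pairs, mirroring the three
    metrics and the sampling idea in the abstract (exact forms are assumed)."""
    rng = random.Random(seed)
    pairs = list(combinations(range(len(segments)), 2))
    rng.shuffle(pairs)
    pairs = pairs[:max_pairs]  # random sampling keeps the cost manageable

    total = 0.0
    for i, j in pairs:
        strength = dependency_strength(segments[i], segments[j])
        distance = (j - i) / len(segments)           # favor long-range pairs
        overlap = len(set(segments[i].split()) & set(segments[j].split()))
        specificity = 1.0 / (1.0 + overlap)          # damp gains from mere repetition
        total += strength * distance * specificity
    return total / max(len(pairs), 1)


if __name__ == "__main__":
    # Documents would be pre-split into segments, scored, ranked, and the
    # top-scoring fraction kept for long-context training.
    doc = [
        "Alice hid the key under the mat before leaving.",
        "The weather that week was unremarkable.",
        "Much later, she retrieved the key from under the mat.",
    ]
    print(long_dependency_score(doc))
```

In practice the compression proxy would be replaced by perplexity differences from the training LLM itself, and scores would be computed over many documents to build a ranked, filtered training corpus.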

Authors (6)
  1. Longze Chen (16 papers)
  2. Ziqiang Liu (16 papers)
  3. Wanwei He (10 papers)
  4. Yunshui Li (18 papers)
  5. Run Luo (22 papers)
  6. Min Yang (239 papers)
Citations (4)