Bucket Pre-training is All You Need (2407.07495v1)

Published 10 Jul 2024 in cs.CL

Abstract: LLMs have demonstrated exceptional performance across various natural language processing tasks. However, the conventional fixed-length data composition strategy for pretraining, which involves concatenating and splitting documents, can introduce noise and limit the model's ability to capture long-range dependencies. To address this, we first introduce three metrics for evaluating data composition quality: padding ratio, truncation ratio, and concatenation ratio. We further propose a multi-bucket data composition method that moves beyond the fixed-length paradigm, offering a more flexible and efficient approach to pretraining. Extensive experiments demonstrate that our proposed method significantly improves both the efficiency and efficacy of LLM pretraining. Our approach not only reduces noise and preserves context but also accelerates training, making it a promising solution for LLM pretraining.
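To make the abstract's ideas concrete, here is a minimal Python sketch of (a) the three data-composition metrics and (b) a multi-bucket assignment of documents to sequence lengths. The function names, bucket lengths, and the exact metric definitions are illustrative assumptions, not the paper's formulas.

```python
# Illustrative sketch only: assumed definitions of the padding, truncation,
# and concatenation ratios, plus a simple multi-bucket assignment scheme.
from typing import Dict, List, Tuple


def composition_metrics(docs: List[List[int]], seq_len: int) -> Tuple[float, float, float]:
    """Metrics for the conventional concatenate-and-split composition (assumed definitions):
      padding ratio       - fraction of token slots filled with pad tokens
      truncation ratio    - fraction of documents split across chunk boundaries
      concatenation ratio - fraction of chunks that mix more than one document
    """
    total_tokens = sum(len(d) for d in docs)
    n_chunks = -(-total_tokens // seq_len)  # ceiling division
    padding_ratio = (n_chunks * seq_len - total_tokens) / (n_chunks * seq_len)

    truncated = 0
    docs_per_chunk = [0] * n_chunks
    pos = 0  # running position in the concatenated token stream
    for doc in docs:
        start, end = pos, pos + len(doc)
        first_chunk, last_chunk = start // seq_len, (end - 1) // seq_len
        if first_chunk != last_chunk:
            truncated += 1
        for c in range(first_chunk, last_chunk + 1):
            docs_per_chunk[c] += 1
        pos = end

    truncation_ratio = truncated / len(docs)
    concatenation_ratio = sum(1 for n in docs_per_chunk if n > 1) / n_chunks
    return padding_ratio, truncation_ratio, concatenation_ratio


def assign_to_buckets(
    docs: List[List[int]],
    bucket_lens: Tuple[int, ...] = (512, 1024, 2048, 4096),
) -> Dict[int, List[List[int]]]:
    """Multi-bucket composition (sketch): place each document into the smallest
    bucket length that holds it, so short documents are neither heavily padded
    nor concatenated with unrelated text, and long ones are rarely truncated."""
    buckets: Dict[int, List[List[int]]] = {b: [] for b in bucket_lens}
    for doc in docs:
        target = next((b for b in bucket_lens if len(doc) <= b), bucket_lens[-1])
        buckets[target].append(doc[:target])  # truncate only beyond the largest bucket
    return buckets


if __name__ == "__main__":
    import random

    random.seed(0)
    corpus = [[1] * random.randint(50, 3000) for _ in range(200)]
    print("fixed-length metrics:", composition_metrics(corpus, seq_len=2048))
    print("documents per bucket:", {b: len(d) for b, d in assign_to_buckets(corpus).items()})
```

A usage note: with a fixed 2048-token sequence length the toy corpus above yields nonzero truncation and concatenation ratios, while the bucket assignment keeps most short documents in small buckets, which is the intuition behind the proposed method.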

Authors (5)
  1. Hongtao Liu
  2. Qiyao Peng
  3. Qing Yang
  4. Kai Liu
  5. Hongyan Xu