
Efficient Pretraining Length Scaling (2504.14992v2)

Published 21 Apr 2025 in cs.CL

Abstract: Recent advances in LLMs have demonstrated the effectiveness of length scaling during post-training, yet its potential in pre-training remains underexplored. We present the Parallel Hidden Decoding Transformer (PHD-Transformer), a novel framework that enables efficient length scaling during pre-training while maintaining inference efficiency. PHD-Transformer achieves this through an innovative KV cache management strategy that distinguishes between original tokens and hidden decoding tokens. By retaining only the KV cache of original tokens for long-range dependencies while immediately discarding hidden decoding tokens after use, our approach maintains the same KV cache size as the vanilla transformer while enabling effective length scaling. To further enhance performance, we introduce two optimized variants: PHD-SWA employs sliding window attention to preserve local dependencies, while PHD-CSWA implements chunk-wise sliding window attention to eliminate linear growth in pre-filling time. Extensive experiments demonstrate consistent improvements across multiple benchmarks.

Summary

Efficient Pretraining Length Scaling

The paper targets an under-explored opportunity in the pre-training phase of LLMs: length scaling, a technique traditionally applied during post-training. The authors propose the Parallel Hidden Decoding Transformer (PHD-Transformer), a framework designed to scale effective sequence length during pre-training while preserving inference efficiency. Its central component is a key-value (KV) cache management strategy that enables this scaling without the growth in cache size and inference cost that naively repeating tokens would otherwise incur.

Overview of Methods and Contributions

At the core of the PHD-Transformer is its KV cache management strategy. During training, the model distinguishes between original tokens and hidden decoding tokens: only the KV entries of original tokens are retained for modeling long-range dependencies, while the KV entries of hidden decoding tokens are discarded immediately after use. As a result, the KV cache stays the same size as in a vanilla transformer even though each original token is processed multiple times, which is what makes the length scaling effective without extra inference-time memory.
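To make the bookkeeping concrete, the following is a minimal single-head sketch of this cache policy. It illustrates the idea rather than the authors' implementation: the head dimension, the scaling factor K, the random projection matrices, and the per-pass offsets that stand in for whatever differentiates the repeated passes in the real model are all assumptions of this sketch.

```python
# Sketch of the PHD-style KV cache policy (illustration only, not the paper's code).
# Each original token is processed K times; only the first pass appends a
# persistent KV entry, while the K-1 "hidden decoding" passes use a transient
# entry that is dropped right after the attention step.
import numpy as np

d = 8          # head dimension (illustrative)
K = 4          # length-scaling factor: passes per original token (illustrative)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
pass_embed = rng.standard_normal((K, d)) * 0.1   # hypothetical per-pass offsets (our assumption)

def attend(q, keys, values):
    """Single-query softmax attention over stacked keys/values."""
    scores = keys @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

cache_k, cache_v = [], []                        # persistent cache: original tokens only
for x in rng.standard_normal((5, d)):            # five toy original-token embeddings
    # Pass 1 (original token): its KV entry is kept for long-range dependencies.
    k, v = Wk @ x, Wv @ x
    cache_k.append(k)
    cache_v.append(v)
    out = attend(Wq @ x, np.stack(cache_k), np.stack(cache_v))
    # Passes 2..K (hidden decoding tokens): KV is used once and never cached.
    for p in range(1, K):
        xp = x + pass_embed[p]
        hk, hv = Wk @ xp, Wv @ xp
        keys = np.stack(cache_k + [hk])          # transient entry, appended on the fly
        vals = np.stack(cache_v + [hv])
        out = attend(Wq @ xp, keys, vals)        # hk/hv are discarded after this step

print("persistent KV entries:", len(cache_k))    # one per original token, independent of K
```

The point of the sketch is the final line: the persistent cache holds one entry per original token regardless of K, which is why the method's inference-time memory matches that of a vanilla transformer.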

To bolster performance, the paper introduces two enhanced variants of the PHD-Transformer:

  1. PHD-SWA (Sliding Window Attention): This variant applies sliding window attention to preserve local dependencies. It requires only a constant amount of additional KV cache memory, yet delivers a clear performance boost.
  2. PHD-CSWA (Chunk-wise Sliding Window Attention): By constraining the sliding window so that it never crosses chunk boundaries, this variant eliminates the linear growth in pre-filling time observed with PHD-SWA; a mask-construction sketch contrasting the two window patterns follows this list.
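As a rough illustration of how the two variants restrict attention differently, the sketch below builds boolean masks for a plain causal sliding window and for a chunk-wise sliding window whose reach is clipped at chunk boundaries. The sequence length, window size, and chunk size are arbitrary illustrative values, and the masks are over generic positions; in the paper the windows apply to hidden decoding tokens specifically, which this sketch abstracts away.

```python
# Contrast a causal sliding-window mask with its chunk-wise counterpart
# (illustrative constants; not the paper's configuration).
import numpy as np

T, W, C = 12, 4, 4            # sequence length, window size, chunk size
i = np.arange(T)[:, None]     # query positions (rows)
j = np.arange(T)[None, :]     # key positions (columns)

causal = j <= i
sliding_window = causal & (i - j < W)            # SWA-style: attend to the last W positions
same_chunk = (i // C) == (j // C)
chunked_window = sliding_window & same_chunk     # CSWA-style: window clipped at chunk edges

print(sliding_window.astype(int))
print(chunked_window.astype(int))
```

Restricting dependencies to within each chunk in this way is what lets PHD-CSWA avoid the linear growth in pre-filling time reported for PHD-SWA.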

Experimental Findings

Through extensive experiments, the paper reports consistent gains for the PHD-Transformer series across a range of benchmarks, with notable improvements on tasks that measure reasoning and mathematical problem solving. The evaluation also indicates that both training loss and downstream performance improve as the token repetition count grows. Importantly, PHD-SWA and PHD-CSWA introduce only marginal increases in decoding latency, underscoring the efficiency of the approach.

Key results include reductions in training loss and measurable accuracy improvements on benchmarks such as ARC, HellaSwag, PIQA, and Winogrande, with effectiveness reported for scaling factors of up to 256. The PHD-CSWA variant is highlighted in particular for balancing computational cost against performance benefits without imposing significant latency overhead.

Implications and Future Directions

The implications of this research are significant for the field. By making length scaling affordable during pre-training, the PHD-Transformer framework offers a viable path to further enhancing LLM capabilities: it spends more computation per token while keeping the KV cache footprint at the level of a vanilla transformer. Its cache-management and token-handling strategies could inform future LLM designs that aim to increase computational depth without a matching increase in resource expenditure.

Practically, this methodology could be adopted in industrial applications requiring high efficiency and performance, such as real-time language processing systems and embedded AI solutions. Theoretically, it opens avenues for exploring deeper integrations of transformer architecture modifications and advanced attention patterns.

Overall, the presented research delivers critical insights and a robust framework for extending the scope and applicability of length scaling in LLMs, warranting further exploration and refinement in subsequent studies.
