
Predictable Scale: Part I -- Optimal Hyperparameter Scaling Law in Large Language Model Pretraining (2503.04715v5)

Published 6 Mar 2025 in cs.LG and cs.AI

Abstract: The impressive capabilities of LLMs across diverse tasks are now well-established, yet their effective deployment necessitates careful hyperparameter optimization. Through extensive empirical studies involving grid searches across diverse configurations, we discover universal scaling laws governing these hyperparameters: optimal learning rate follows a power-law relationship with both model parameters and data sizes, while optimal batch size scales primarily with data sizes. Our analysis reveals a convex optimization landscape for hyperparameters under fixed models and data size conditions. This convexity implies an optimal hyperparameter plateau. We contribute a universal, plug-and-play optimal hyperparameter tool for the community. Its estimated values on the test set are merely 0.09% away from the globally optimal LLM performance found via an exhaustive search. These laws demonstrate remarkable robustness across variations in model sparsity, training data distribution, and model shape. To the best of our knowledge, this is the first work that unifies different model shapes and structures, such as Mixture-of-Experts models and dense transformers, and that establishes optimal hyperparameter scaling laws across diverse data distributions. This exhaustive optimization process demands substantial computational resources, utilizing nearly one million NVIDIA H800 GPU hours to train 3,700 LLMs of varying sizes and hyperparameters from scratch and consuming approximately 100 trillion tokens in total. To facilitate reproducibility and further research, we will progressively release all loss measurements and model checkpoints through our designated repository https://step-law.github.io/

The paper "Predictable Scale: Part I — Optimal Hyperparameter Scaling Law in LLM Pretraining" introduces a new hyperparameter scaling law, termed Step Law, for pretraining LLMs. The authors posit that Step Law can be used as a plug-and-play tool for optimizing the learning rate and batch size in LLM pretraining.

The paper's primary claims and contributions include:

  1. Convexity of the Hyperparameter Loss Landscape: The research demonstrates that the loss landscape, with respect to the learning rate and batch size, exhibits convexity under fixed model parameters and data size conditions. This convexity suggests the existence of an optimal hyperparameter plateau.
  2. Universal Hyperparameter Scaling Laws (Step Law): The paper introduces a universal and robust hyperparameter scaling law applicable across variations in model sparsity, training data distribution, and model shape. Step Law posits that the optimal learning rate, $\eta(N, D)$, and batch size, $B(D)$, follow power-law relationships:

    $\eta(N, D) = 1.79\, N^{-0.713} D^{0.307}$

    $B(D) = 0.58\, D^{0.571}$

    where:

    • $N$ is the number of non-embedding parameters in the model
    • $D$ is the dataset size in tokens.

The scaling laws suggest that the optimal batch size primarily depends on the dataset size, while the optimal learning rate depends on both model parameters and dataset size (a short code sketch evaluating these formulas follows the list below).

  3. Transferability and Invariance Across Data Distributions and Model Architectures: The paper investigates the transferability of optimal hyperparameter scaling laws across different pretraining data distributions and model architectures. The findings suggest that Step Law maintains high generalizability and robustness across different corpora distributions, model architectures, and both dense and sparse (MoE) LLMs with varying sparsity ratios.
  4. Extensive Empirical Validation: The conclusions are supported by a large-scale empirical study involving:
    • Experiments across 3,700 model configurations, training LLMs from scratch over a range of dense and MoE architectures (with varying sparsity ratios), data distributions, and hyperparameter settings.
    • Compute consumption approaching 1 million NVIDIA H800 GPU hours, with approximately 100 trillion tokens processed during training.
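
As a concrete illustration, the fitted Step Law formulas above can be evaluated directly. The sketch below is a minimal Python rendering of the two power laws; the function names and the example values of $N$ and $D$ are illustrative, not taken from the paper.

```python
def step_law_lr(n_params: float, n_tokens: float) -> float:
    """Optimal peak learning rate eta(N, D) from the fitted Step Law."""
    return 1.79 * n_params ** -0.713 * n_tokens ** 0.307


def step_law_batch_size(n_tokens: float) -> float:
    """Optimal batch size B(D), in tokens, from the fitted Step Law."""
    return 0.58 * n_tokens ** 0.571


# Example: a model with 1B non-embedding parameters trained on 100B tokens.
N, D = 1e9, 100e9
print(f"eta(N, D) ~ {step_law_lr(N, D):.2e}")              # peak learning rate
print(f"B(D)      ~ {step_law_batch_size(D):.2e} tokens")  # batch size in tokens
```

For these example values the formulas give a peak learning rate of roughly 1.6e-3 and a batch size on the order of a million tokens.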

The paper compares Step Law with existing hyperparameter scaling approaches, including OpenAI Law, Microsoft Law, DeepSeek Law, Porian Law, MiniCPM Law, and MeiTuan Law. The comparison focuses on factors such as suitability for different data recipes, model sparsity, and relative error in loss prediction.

The paper uses the following notation:

  • $\mathcal{L}$: Cross-entropy loss
  • $D$: Dataset size in tokens
  • $N$: Number of non-embedding parameters in the model
  • $\hat{N}$: Total number of parameters in the model
  • $C$: Compute budget in FLOPs
  • $N_{layer}$: Number of layers in the Transformer model
  • $d_{ff}$: Dimension of the feed-forward network hidden layer in the Transformer
  • $d_{model}$: Hidden dimension of the Transformer model
  • $N_{head}$: Number of attention heads in the Transformer model
  • $\eta(N, D)$: Optimal peak learning rate for a given parameter count $N$ and dataset size $D$
  • $B(N, D)$: Optimal batch size (in tokens) for a given parameter count $N$ and dataset size $D$

The paper details the experimental setup, including the dataset composition (web text, mathematical content, and code), Byte Pair Encoding (BPE) tokenizer, model architecture (RMSNorm, SwiGLU activation function, ALiBi positional encoding), and optimizer (AdamW). The learning rate schedule includes a linear warmup phase followed by a cosine decay.
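
To make the schedule shape concrete, the following is a minimal sketch of a linear-warmup plus cosine-decay schedule that decays to a fixed final learning rate; the warmup length and final learning rate used here are placeholder assumptions, not the paper's settings.

```python
import math

def lr_schedule(step: int, total_steps: int, peak_lr: float,
                warmup_steps: int = 2000, final_lr: float = 1e-5) -> float:
    """Linear warmup to peak_lr, then cosine decay to a fixed final_lr.

    warmup_steps and final_lr are illustrative values, not the paper's settings.
    """
    if step < warmup_steps:
        # Linear warmup from 0 to the peak learning rate.
        return peak_lr * (step + 1) / warmup_steps
    # Cosine decay from peak_lr down to final_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return final_lr + (peak_lr - final_lr) * cosine
```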

The ablation experiments validate the use of smoothed training loss as an unbiased estimate of validation loss and demonstrate the convexity of the loss landscape with respect to the learning rate and batch size. The authors also justify the use of a fixed final learning rate strategy.
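
The paper's exact smoothing procedure is not reproduced in this summary; as one common choice, an exponential moving average over per-step training losses could be applied, as in the hypothetical sketch below.

```python
def smooth_losses(losses, alpha=0.99):
    """Exponential moving average of per-step training losses.

    One possible smoothing choice for illustration; the paper's actual
    smoothing window and method may differ.
    """
    smoothed, ema = [], None
    for loss in losses:
        ema = loss if ema is None else alpha * ema + (1 - alpha) * loss
        smoothed.append(ema)
    return smoothed
```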

The paper demonstrates the topological invariance of the hyperparameter scaling law across varied model shapes by conducting controlled experiments with different model shape combinations (number of layers, attention heads, feed-forward network dimensions). Additionally, the paper investigates the sparsity independence of the hyperparameter scaling law in MoE models across different sparsity levels and model shapes. The results show that Step Law consistently achieves a low relative prediction error across all sparsity levels. Finally, the paper assesses the robustness of Step Law across varied data distributions by designing three distinct data distributions: bilingual corpus, code integration, and code-dominant. The formula maintains predictive accuracy across all three distributions.

The authors acknowledge the limitations of their empirical approach and call for future work to develop a theoretical understanding of the observed power-law relationships.

Authors (10)
  1. Houyi Li (10 papers)
  2. Qiufeng Wang (36 papers)
  3. Hanshan Zhang (3 papers)
  4. Zili Wang (52 papers)
  5. Shuigeng Zhou (81 papers)
  6. Xiangyu Zhang (328 papers)
  7. Daxin Jiang (138 papers)
  8. Wenzhen Zheng (5 papers)
  9. Shijie Xuyang (4 papers)
  10. Yuantao Fan (8 papers)