Pretraining LLMs is a computationally intensive process, and effectively tuning hyperparameters like the learning rate (LR) across different model scales is a significant challenge. Maximal Update Parametrization (µP) (Yang et al., 2022) was proposed to address this by providing scaling rules for initialization and LR that theoretically enable hyperparameter transfer with model width (embedding dimension $d$). However, empirical studies applying µP to LLMs have shown conflicting results, particularly regarding the optimal embedding layer LR.
This paper, "Optimal Embedding Learning Rate in LLMs: The Effect of Vocabulary Size" (Hayou et al., 17 Jun 2025 ), investigates why P's predictions for LLMs might be inaccurate. The authors identify a key limitation in the standard P theory: it assumes a fixed input dimension (vocabulary size ) while only scaling model width . In practice, LLM vocabulary sizes are often much larger than model width, and the relationship between and is not fixed across all scales and datasets. Furthermore, the embedding layer acts as a lookup table, meaning updates are heavily influenced by token frequencies, a factor not fully captured by traditional P analysis.
The paper provides a theoretical analysis using a simplified linear model consisting only of embedding and projection layers, trained with an Adam-like optimizer (specifically, SignSGD for tractability); a minimal code sketch of this setup follows the list below. The authors analyze how the magnitudes of the updates to the embedding and projection weights scale as both the model width $d$ and the vocabulary size $n$ become large. This analysis reveals two distinct regimes:
- µP Regime: When the vocabulary size $n$ is fixed while $d$ grows, the update magnitudes for both the embedding and projection layers scale with $d$ as µP assumes. This aligns with the conditions under which µP was derived, suggesting optimal LRs scaling as $\Theta(1)$ for the embedding layer and $\Theta(1/d)$ for hidden/projection layers to achieve $\Theta(1)$ feature updates.
- Large Vocabulary (LV) Regime: When the vocabulary size scales with $d$ (e.g., $n = \Theta(d)$) or is much larger ($n \gg d$), the analysis shows that the embedding-layer update deviates from the µP prediction by a factor of roughly $\sqrt{d}$, while the projection-layer update still scales as in the µP regime. This difference arises from the effect of a large $n$ and of token frequencies on the element-wise normalization used in Adam-like optimizers.
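As a rough illustration of the simplified setting (an assumption-laden sketch, not the authors' code), the following toy snippet implements an embedding lookup followed by a linear projection, trained with SignSGD so that every weight update has magnitude exactly equal to the layer's LR; all sizes and LR values are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy version of the paper's simplified analysis setting: an embedding
# lookup E (n x d) followed by a projection W (d x n), trained with
# SignSGD. Sizes and learning rates below are illustrative assumptions.
n_vocab, d_width = 512, 64
emb = nn.Embedding(n_vocab, d_width)            # lookup table E
proj = nn.Linear(d_width, n_vocab, bias=False)  # projection W

lr_emb, lr_proj = 1e-2, 1e-3  # per-layer LRs (the quantities under study)

def signsgd_step(tokens, targets):
    """One SignSGD step on a batch of (input token, target token) pairs."""
    logits = proj(emb(tokens))                  # forward pass: W E(x)
    loss = F.cross_entropy(logits, targets)
    emb.zero_grad(); proj.zero_grad()
    loss.backward()
    with torch.no_grad():
        # SignSGD: each nonzero-gradient weight moves by exactly +/- lr.
        # Only rows of E for tokens present in the batch get nonzero
        # gradients, so token frequency shapes the embedding updates.
        emb.weight -= lr_emb * emb.weight.grad.sign()
        proj.weight -= lr_proj * proj.weight.grad.sign()
    return loss.item()

# Illustrative batch of random tokens.
tokens = torch.randint(0, n_vocab, (32,))
targets = torch.randint(0, n_vocab, (32,))
print(signsgd_step(tokens, targets))
```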
This theoretical finding suggests that in the LV regime, which the authors argue is more representative of modern LLMs, the optimal scaling of the embedding LR differs from µP's prediction. To maintain balanced feature-learning updates across layers, the embedding LR ($\eta_{\text{emb}}$) should be scaled relative to the hidden/projection LR ($\eta_{\text{hid}}$) such that $\eta_{\text{emb}} / \eta_{\text{hid}} = \Theta(\sqrt{d})$. This contrasts with µP's suggested ratio of $\Theta(d)$ and with standard practice (Standard Parametrization, SP), which typically uses a ratio of $\Theta(1)$ (the same LR everywhere).
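Written out, the three parametrizations correspond to the following embedding-to-hidden LR ratios as a function of width $d$:

$$
\underbrace{\frac{\eta_{\text{emb}}}{\eta_{\text{hid}}} = \Theta(1)}_{\text{SP}}
\qquad
\underbrace{\frac{\eta_{\text{emb}}}{\eta_{\text{hid}}} = \Theta(d)}_{\mu\text{P}}
\qquad
\underbrace{\frac{\eta_{\text{emb}}}{\eta_{\text{hid}}} = \Theta\!\left(\sqrt{d}\right)}_{\text{LV regime (this paper)}}
$$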
Based on this, the authors propose a Large Vocabulary Parametrization (LVP). While the theoretical analysis used a simplified model and optimizer, the authors hypothesize that the core finding regarding the embedding layer's sensitivity to vocabulary size carries over to full transformer architectures trained with Adam. LVP uses Standard Parametrization-like initialization (weight variance $\Theta(1/d)$) and µP-like LR scaling for hidden and output layers ($\Theta(1/d)$), but incorporates the $\sqrt{d}$-rule for the embedding layer LR ($\Theta(1/\sqrt{d})$). This yields the desired $\Theta(\sqrt{d})$ ratio.
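A hedged sketch of what LVP amounts to in practice, assuming a width $d$, a global base LR, and an Adam-like optimizer; the helper name and the constants are illustrative, not taken from the paper:

```python
import math

def lvp_hparams(d: int, base_lr: float = 1e-2, base_std: float = 1.0):
    """Illustrative LVP-style hyperparameters for width d (assumed form).

    - SP-like init: weight std ~ 1/sqrt(d), i.e. variance ~ 1/d.
    - muP-like LRs for hidden/output layers: ~ base_lr / d.
    - sqrt(d)-rule for the embedding LR: ~ base_lr / sqrt(d).
    """
    return {
        "init_std_hidden": base_std / math.sqrt(d),
        "lr_embedding": base_lr / math.sqrt(d),
        "lr_hidden": base_lr / d,
        "lr_output": base_lr / d,
    }

# Doubling the width halves the hidden LR but shrinks the embedding LR
# only by sqrt(2), so the embedding-to-hidden ratio stays at sqrt(d).
for d in (1024, 2048, 4096):
    hp = lvp_hparams(d)
    print(d, hp["lr_embedding"] / hp["lr_hidden"], math.sqrt(d))
```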
The paper validates these theoretical findings with experiments:
- Small Model Scaling with Vocabulary: They trained a small transformer while scaling both the width $d$ and the vocabulary size $n$ such that $n$ grows linearly with $d$. By sweeping the embedding LR ($\eta_{\text{emb}}$) while fixing the hidden/projection LRs according to LVP ($\Theta(1/d)$), they observed that the optimal $\eta_{\text{emb}}$ indeed decreases sublinearly with $d$, aligning more closely with the $\Theta(1/\sqrt{d})$ behavior predicted by their theory than with µP's prediction of a width-independent embedding LR (\cref{fig:emb_lr_vocab_scaling}).
- Production-Scale 1B Model Pretraining: To assess the practical benefit, they trained a 1B-parameter dense transformer with embedding dimension $d$ on a large, production-scale dataset (1.75T tokens, used for Phi-3 (Abdin et al., 2024)). A baseline model used the conventional practice of applying the same LR across all layers ($\eta_{\text{emb}} = \eta_{\text{hid}}$), while their experimental model used $\eta_{\text{emb}} = \sqrt{d} \cdot \eta_{\text{hid}}$. The model trained with the $\sqrt{d}$ ratio for the embedding LR achieved consistently lower training loss and better perplexity on the WikiText test set than the baseline (\cref{fig:training_ppl}, \cref{fig:test_ppl}). Experiments with other ratios confirmed that $\sqrt{d}$ was near-optimal.
Practical Implementation:
- The key takeaway for practitioners is that when pretraining LLMs with large vocabularies, the embedding layer's learning rate should likely be higher than that of the hidden and projection layers.
- Specifically, the paper suggests setting the ratio of the embedding LR to the hidden/projection LR to approximately $\sqrt{d}$, where $d$ is the embedding dimension (model width). With an Adam-like optimizer, a global LR $\eta$, and standard $1/d$ scaling for hidden layers, the embedding LR could be set to $\eta/\sqrt{d}$ and the hidden/output LRs to $\eta/d$ (see the code sketch after this list).
- The authors' LVP parametrization combines SP-like initialization variance ($\Theta(1/d)$) with µP-like LR scaling ($\Theta(1/d)$ for hidden/output layers) and the $\sqrt{d}$-rule for the embedding LR ($\Theta(1/\sqrt{d})$). However, the empirical results for the 1B model primarily concern the ratio of LRs, implying that adjusting the embedding LR relative to the others is the most critical aspect.
- Experimentation with LR ratios around $\sqrt{d}$ may still be necessary to find the absolute optimum for a specific model size, architecture, and dataset (\cref{fig:training_ppl_different_ratios}).
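As a concrete, hedged illustration of the $\eta/\sqrt{d}$ vs. $\eta/d$ split mentioned above, here is one way to wire it into PyTorch Adam parameter groups; the name-based matching of embedding parameters is an assumption about the model's naming, not a rule from the paper:

```python
import math
import torch
from torch import nn

def make_optimizer(model: nn.Module, d_model: int, base_lr: float = 1e-2,
                   emb_keywords=("embed", "wte")):
    """Adam with a higher LR for embedding parameters (illustrative sketch).

    Embedding params get base_lr / sqrt(d_model); everything else gets
    base_lr / d_model, so the embedding-to-hidden ratio is sqrt(d_model).
    The keyword matching via `emb_keywords` is an assumption and would need
    adjusting for a real codebase.
    """
    emb_params, other_params = [], []
    for name, p in model.named_parameters():
        (emb_params if any(k in name for k in emb_keywords)
         else other_params).append(p)
    groups = [
        {"params": emb_params, "lr": base_lr / math.sqrt(d_model)},
        {"params": other_params, "lr": base_lr / d_model},
    ]
    return torch.optim.Adam(groups)

# Minimal usage example with a toy embedding + output-head "model".
model = nn.Sequential()
model.add_module("embed_tokens", nn.Embedding(1000, 64))
model.add_module("lm_head", nn.Linear(64, 1000, bias=False))
opt = make_optimizer(model, d_model=64)
print([g["lr"] for g in opt.param_groups])
```

Note that the production 1B run described above only changed the embedding-to-hidden LR ratio (raising the embedding LR by $\sqrt{d}$ relative to a shared global LR), so adjusting the ratio, rather than adopting the full LVP recipe, is the lightest-touch change to try first.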
Limitations and Future Work:
- The theoretical analysis uses a simplified linear model and SignSGD, and extending it to full transformers with Adam is complex.
- Optimal scaling rules might depend on the training step $t$, not just on $d$ and $n$.
- The analysis highlights that optimal LR scaling is sensitive to token frequencies, suggesting that more advanced parametrizations might benefit from explicitly incorporating this information.
- While the $\sqrt{d}$-rule improves training efficiency, it is not proven to guarantee perfect hyperparameter transfer across all scales and datasets, unlike the theoretical claims of µP (though µP itself shows limitations in practice for LLMs).
In conclusion, the paper provides both theoretical evidence and empirical validation that vocabulary size is a critical factor influencing the optimal embedding learning rate in LLMs. It challenges the universality of µP scaling in the large-vocabulary setting and proposes a practical heuristic, setting the embedding LR roughly $\sqrt{d}$ times higher than the hidden/projection LRs, that demonstrably improves training performance for large LLMs.