Quantized Training Scaling Laws
- Quantized training scaling laws are a framework that predicts loss reduction as a power-law function of model parameters, training tokens, and quantization precision.
- They unify various compression techniques by replacing raw parameter count with an effective capacity measure, enabling precise performance trade-off analysis under quantization-induced error.
- Empirical models, including those for quantization-induced degradation and optimal QAT allocation, guide the design of efficient low-precision large language models.
Quantized training scaling laws describe the predictable interplay between model size, data volume, resource constraints, and the effects of quantization on neural network performance. Modern research has established a suite of unified, empirically validated scaling laws that account for quantization-induced error, the structure of quantization schemes, and the allocation of training compute in quantized regimes. These laws enable precise predictions of cross-entropy loss, effective parameter scaling, performance trade-offs under precision constraints, and the emergence (and limitations) of low-precision models in large-scale machine learning.
1. Quantization Hypotheses and Classical Scaling Laws
The foundational quantized training scaling law extends neural scaling theory by attributing the power-law decrease of loss to the sequential acquisition of discrete “quanta” of knowledge or skill. The quantization model is built on three mathematical hypotheses (Michaud et al., 2023):
- Discreteness (QH1): Model loss depends solely on which quanta (indexed by use frequency) have been learned.
- Ordered Learning (QH2): Quanta are learned in order of use frequency, so a model that has learned $n$ quanta has acquired the $n$ most frequently useful ones; the remaining quanta are not yet learned.
- Zipfian Use Frequencies (QH3): The probability that a prediction sample uses the $k$-th quantum follows a Zipf law,
$$p_k = \frac{1}{\zeta(\alpha+1)}\, k^{-(\alpha+1)},$$
where $\zeta$ is the Riemann zeta function.
Aggregate test loss after learning the $n$ most frequent quanta is then
$$L_n = b \sum_{k \le n} p_k + a \sum_{k > n} p_k,$$
where $b$ and $a$ are the losses after and before learning each quantum.
For a model with $N$ parameters (each quantum requiring capacity $C$ parameters), $n \approx N/C$, yielding $L(N) - L_\infty \propto N^{-\alpha}$. This formalizes the power-law scaling of loss with model size and extends to data scaling and optimization-step scaling.
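As a concrete illustration of how QH1–QH3 yield this power law, the sketch below evaluates the expected loss numerically. The values $\alpha = 0.4$, per-quantum losses $a = 1$ and $b = 0$, a capacity of 100 parameters per quantum, and the truncation point of the Zipf normalization are all hypothetical choices for illustration.

```python
def expected_loss(n_learned: int, alpha: float, a: float = 1.0, b: float = 0.0,
                  k_max: int = 200_000) -> float:
    """Aggregate loss under the quanta model: the n most frequent quanta are
    learned (per-use loss b), the rest are not (per-use loss a).  The Zipf
    normalization zeta(alpha + 1) is truncated at k_max."""
    weights = [k ** -(alpha + 1.0) for k in range(1, k_max + 1)]
    z = sum(weights)
    tail = sum(weights[n_learned:]) / z   # probability mass of unlearned quanta
    return b + (a - b) * tail             # ~ b + const * n^(-alpha) for large n

# A model with N parameters and capacity C = 100 parameters per quantum learns
# n = N // C quanta, so L(N) - L_inf decays roughly as N^(-alpha).
for n_params in (10_000, 100_000, 1_000_000):
    print(n_params, round(expected_loss(n_params // 100, alpha=0.4), 4))
```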
2. Unified Scaling Laws for Quantized and Compressed Training
Recent work has generalized the above to encompass arbitrary model compression, including quantization, sparsity, and their compositions. The key principle is to replace the dense parameter count $N$ with an effective parameter count $N_{\text{eff}} = \text{eff}(R)\, N$, determined by a capacity factor $\text{eff}(R)$ reflecting the representational efficiency of the compression format $R$ (Panferov et al., 2 Jun 2025, Frantar et al., 23 Feb 2025):
$$L(N, D) = \frac{A}{\left(\text{eff}(R)\, N\right)^{\alpha}} + \frac{B}{D^{\beta}} + E,$$
where:
- $A$, $B$, $E$, $\alpha$, $\beta$ are task- and domain-specific constants;
- $\text{eff}(R) \in (0, 1]$ quantifies representation capacity.
For uniform $b$-bit scalar quantization, $\text{eff}(b)$ is fit empirically per bit-width; capacity remains near unity at 8 bits and degrades sharply below 4 bits (Frantar et al., 23 Feb 2025).
Joint sparse-quantization and vector quantization are compositional: to first approximation, capacity factors multiply, $\text{eff}(R_1 \circ R_2) \approx \text{eff}(R_1)\,\text{eff}(R_2)$.
A capacity mapping $\text{eff}(\cdot)$ can be empirically fit from the root-mean-square error incurred when compressing Gaussian data, providing a unified axis along which to compare disparate formats (Panferov et al., 2 Jun 2025).
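The sketch below illustrates this unified-capacity view end to end: it measures the Gaussian-compression distortion of a uniform absmax quantizer, then plugs assumed capacity factors into the Chinchilla-style form above. The quantizer, the placeholder constants $A$, $B$, $E$, $\alpha$, $\beta$, and the capacity values for INT4 and 2:4 sparsity are all illustrative, not the fitting procedure or numbers of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse_uniform_quant(bits: int, n_samples: int = 100_000, clip: float = 4.0) -> float:
    """RMSE of uniform b-bit scalar quantization of standard Gaussian data
    (absmax-style clipping at +/-clip) -- the raw distortion signal from which
    a capacity mapping eff(.) is empirically fit."""
    x = rng.standard_normal(n_samples)
    step = 2.0 * clip / (2 ** bits - 1)
    xq = np.clip(np.round(x / step) * step, -clip, clip)
    return float(np.sqrt(np.mean((x - xq) ** 2)))

def predicted_loss(n_params: float, tokens: float, eff: float,
                   A: float = 406.0, B: float = 411.0, E: float = 1.69,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Chinchilla-style loss with the dense parameter count replaced by
    N_eff = eff * N; the constants are Chinchilla-like placeholders, not fits."""
    return A / (eff * n_params) ** alpha + B / tokens ** beta + E

# Hypothetical capacity factors; composition of formats is treated as multiplicative.
eff_int4, eff_2to4_sparse = 0.75, 0.85            # assumed values, for illustration only
eff_joint = eff_int4 * eff_2to4_sparse

for b in (8, 4, 3, 2):
    print(f"{b}-bit Gaussian RMSE: {rmse_uniform_quant(b):.4f}")
print("predicted loss, dense      :", round(predicted_loss(7e9, 2e12, 1.0), 4))
print("predicted loss, INT4 + 2:4 :", round(predicted_loss(7e9, 2e12, eff_joint), 4))
```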
3. Explicit Power-Law Formulas: Effects of Tokens, Model Size, and Bit Width
Direct post-training quantization introduces quantization-induced degradation (QiD), with a precisely measured scaling law for low-bit LLM checkpoints (Ouyang et al., 26 Nov 2024):
$$\Delta_{\text{QiD}}(N, D, P) = k \cdot \frac{D^{\beta}}{N^{\alpha}\, P^{\gamma}},$$
with empirically fitted constants $k$, $\alpha$, $\beta$, $\gamma$, where $N$: non-embedding model parameters, $D$: training tokens, $P$: quantization bit-width.
Key empirical consequences:
- QiD grows steeply with $D$ at fixed $N$;
- QiD shrinks only mildly as $N$ increases;
- QiD falls off rapidly as the bit-width $P$ increases.
Extrapolated to the study's 100-trillion-token training budgets, even 4-bit models are projected to incur substantial QiD, indicating severe limits as LLMs become more fully trained. Undertrained checkpoints exhibit far less degradation under quantization; hence QiD can serve as a practical proxy for training completeness.
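A hedged sketch of how such a law is used in practice, with made-up constants rather than the fitted values of Ouyang et al.: forward evaluation predicts QiD for a checkpoint, and inverting for $D$ gives the token count at which a given degradation is reached, which is the sense in which QiD proxies training completeness.

```python
def qid(n_params: float, tokens: float, bits: float,
        k: float = 0.02, alpha: float = 0.2, beta: float = 0.5, gamma: float = 5.0) -> float:
    """Quantization-induced degradation in the functional form above: grows with
    tokens D, shrinks with model size N, falls off rapidly with bit-width P.
    The constants are illustrative placeholders, not the fitted values."""
    return k * tokens ** beta / (n_params ** alpha * bits ** gamma)

def tokens_until_qid(target_qid: float, n_params: float, bits: float,
                     k: float = 0.02, alpha: float = 0.2, beta: float = 0.5,
                     gamma: float = 5.0) -> float:
    """Invert the law for D: the training-token count at which a model of size N
    reaches a given QiD under P-bit post-training quantization -- the sense in
    which QiD proxies training completeness."""
    return (target_qid * n_params ** alpha * bits ** gamma / k) ** (1.0 / beta)

print(f"QiD(7B params, 1T tokens, 4-bit) ~ {qid(7e9, 1e12, 4):.3f} nats")
print(f"tokens until QiD reaches 0.2 nats: {tokens_until_qid(0.2, 7e9, 4):.2e}")
```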
4. Floating-Point Quantization: Structure, Critical Data Size, and Compute-Optimality
Precision-structured floating-point quantization reveals further refinements (Sun et al., 5 Jan 2025). The validation loss under floating-point quantization is modeled as a Chinchilla-style sum of a model-size term, a data term, and an additive quantization penalty that depends on the exponent bit count $E$, the mantissa bit count $M$, and the block size $B$ of the scaling factors; the penalty shrinks as $E$ and $M$ grow and as $B$ shrinks, but grows with the amount of training data.
Exponent bits contribute more to loss reduction than mantissa bits. The law yields an optimal exponent–mantissa split for any given total bit budget, identifying specific optimal E/M layouts for FP8 and FP4.
A critical data size $D_{\text{crit}}$ emerges, beyond which the quantization penalty dominates and further training data increases validation loss. For lower-precision settings, $D_{\text{crit}}$ can fall within realistic dataset sizes, yielding U-shaped loss curves.
Over a fixed compute budget, the precision that optimizes cost-performance lies in the $4$–$8$ bit range under a wide set of configurations.
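To make the critical-data-size behavior concrete, the sketch below assumes a minimal loss model: a standard data term plus a quantization penalty that grows with training data. The functional form and every constant are placeholders chosen for illustration, not the fitted law of Sun et al.; they merely reproduce the qualitative U-shape and a closed-form $D_{\text{crit}}$.

```python
def fp_train_loss_vs_data(D: float, d: float = 411.0, beta: float = 0.28,
                          q: float = 5e-2, gamma: float = 0.1) -> float:
    """Data-dependent part of validation loss under low-precision training,
    sketched as a standard data term d / D**beta plus a quantization penalty
    q * D**gamma that grows with training data; q would itself shrink with
    more exponent/mantissa bits (E, M) and smaller block size B.  All
    constants here are placeholders."""
    return d / D ** beta + q * D ** gamma

def critical_data_size(d: float = 411.0, beta: float = 0.28,
                       q: float = 5e-2, gamma: float = 0.1) -> float:
    """Closed-form minimizer of the sketch above: beyond D_crit, adding
    training data *increases* loss, producing the U-shaped curves described
    in Section 4."""
    return (beta * d / (gamma * q)) ** (1.0 / (beta + gamma))

D_crit = critical_data_size()
print(f"D_crit ~ {D_crit:.2e} tokens")
for D in (D_crit / 10.0, D_crit, D_crit * 10.0):
    print(f"L_data({D:.2e}) = {fp_train_loss_vs_data(D):.4f}")
```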
5. Quantization-Aware Training: Scaling Laws, Bottlenecks, and Mixed-Precision
Quantization-aware training (QAT) is characterized by a distinct scaling law for the quantization error $\delta$ (the loss gap relative to full precision) (Chen et al., 20 May 2025):
$$\delta(N, D, G) = k \cdot \frac{D^{\beta}\, G^{\gamma}}{N^{\alpha}},$$
with $N$: parameters, $D$: training tokens, $G$: quantization group size, and constants $k$, $\alpha$, $\beta$, $\gamma$ fitted per precision setting (the reported fit is for W4A4).
- Larger $N$ reduces $\delta$.
- Increasing $D$ increases $\delta$, implying that quantized models lag their full-precision counterparts further as data grows.
- Coarser quantization groupings severely deteriorate accuracy; the effect is dominant in activation quantization.
Decomposition into weight and activation components shows that activation error is strongly sensitive to the group size $G$ due to outliers, particularly in the FC2 layer. Applying mixed precision (e.g., using 8-bit only for the FC2-Proj input) greatly mitigates this granularity-induced error.
Given a target loss penalty $\delta^{*}$, the required $N$ can be computed by inverting the fitted law, allowing direct trade-off planning between $N$, $D$, and $G$.
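The sketch below carries out that inversion under the separable power-law form written above, using illustrative constants rather than the published W4A4 fit.

```python
def qat_loss_gap(N: float, D: float, G: int,
                 k: float = 0.3, alpha: float = 0.3, beta: float = 0.15,
                 gamma: float = 0.2) -> float:
    """QAT loss gap to full precision under the separable power-law form of
    Section 5: shrinks with parameters N, grows with tokens D and with the
    quantization group size G.  Constants are illustrative, not the W4A4 fit."""
    return k * D ** beta * G ** gamma / N ** alpha

def params_for_gap(target_gap: float, D: float, G: int,
                   k: float = 0.3, alpha: float = 0.3, beta: float = 0.15,
                   gamma: float = 0.2) -> float:
    """Invert the law for N: the smallest model size whose QAT gap stays below
    target_gap at a given token budget and group size."""
    return (k * D ** beta * G ** gamma / target_gap) ** (1.0 / alpha)

print(f"gap(7B params, 1T tokens, G=128) ~ {qat_loss_gap(7e9, 1e12, 128):.3f} nats")
print(f"N needed for a 0.05-nat gap at G=32: {params_for_gap(0.05, 1e12, 32):.2e}")
```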
6. Compute-Optimality and Allocation in QAT Regimes
Optimally dividing training between a full-precision (FP) phase and a QAT phase leads to nontrivial but predictable scaling (Dremov et al., 26 Sep 2025). The key variable is the tokens-per-parameter-byte statistic
$$\rho = \frac{D}{N \cdot B_{\text{QAT}}/8},$$
where $D$: total training tokens; $N$: parameters; $B_{\text{QAT}}$: QAT bit-width.
The compute-optimal fraction of training allocated to QAT, $f_{\text{QAT}}$, is given as a fitted function of this statistic. The final loss is modeled as a full-precision scaling-law term plus a quantization penalty with terms for irreducible error, pure-QAT adaptation, and FP–QAT interaction. The fitted law attains high goodness of fit and low prediction error across four distinct bit-widths and multiple model sizes.
With this framework, it is possible to derive closed-form allocations for optimal QAT under compute and memory budgets, and to determine the best bit-width for a fixed deployment constraint.
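The following sketch shows how such an allocation rule would be consumed in planning. The statistic matches the definition above, but the allocation function itself (its saturating power-law shape and its constants) is entirely hypothetical, standing in for the fitted closed form of Dremov et al.

```python
def tokens_per_parameter_byte(D: float, N: float, bits: int) -> float:
    """The allocation statistic of Section 6: training tokens divided by the
    quantized model footprint in bytes (N parameters * bits/8 bytes each)."""
    return D / (N * bits / 8.0)

def qat_fraction(rho: float, c: float = 0.04, p: float = 0.33) -> float:
    """HYPOTHETICAL allocation rule: a saturating power law in rho standing in
    for the fitted closed form; c and p are made-up constants used only to
    show how such a rule would be consumed in planning."""
    return min(1.0, c * rho ** p)

N, bits = 1e9, 4
for D in (2e10, 2e11, 2e12):
    rho = tokens_per_parameter_byte(D, N, bits)
    print(f"D={D:.0e}: rho={rho:.0f} tokens/param-byte -> "
          f"spend ~{qat_fraction(rho):.0%} of the schedule in QAT")
```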
7. Practical Design Guidance and Phase Boundary Implications
- Parameter Multipliers: Effective model size under quantization scales as $\text{eff}(b)\,N$ for weight-only quantization, or with a joint capacity factor for weight–activation pairs, with empirical efficiency falling sharply below 4 bits (see the selection sketch after this list).
- Critical Regimes: As LLM training runs extend into the multi-trillion-token regime, post hoc low-bit quantization becomes impractical except for extremely large models, unless QAT is performed during training.
- Phase Diagrams: Bit-width/size trade-offs permit Pareto-optimal configurations under storage or compute constraints (e.g., 4–8 bit floating-point formats are often optimal for training and inference).
- Mixed-Precision: Employ mixed-precision only in layers with observed heavy-tailed activation distributions (notably FC2), to mitigate error concentration without excessive memory overhead.
- QAT Planning: Allocate the QAT phase in proportion to the tokens-per-parameter-byte statistic for maximal performance; early full precision followed by late QAT, or a fusion scheme with staged learning-rate decay, minimizes loss at fixed compute.
- Unified Capacity Axis: The “capacity” metric, related to the mean-squared-error of the representation, provides a universal axis for model comparison and training prescription across quantized, sparse, and jointly-compressed formats.
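To make the Parameter Multipliers and Phase Diagrams guidance concrete, the sketch below picks the weight bit-width that maximizes effective capacity under a fixed storage budget. The capacity table is assumed for illustration; real $\text{eff}(b)$ values are fit empirically as described in Section 2.

```python
def best_bitwidth(memory_bytes: float, eff_by_bits: dict[int, float]) -> tuple[int, float]:
    """Choose the weight bit-width that maximizes effective capacity
    N_eff = eff(b) * N under a fixed storage budget, where N = 8 * bytes / b.
    A minimal Pareto-style selection over bit-widths."""
    def n_eff(b: int) -> float:
        return eff_by_bits[b] * 8.0 * memory_bytes / b
    best = max(eff_by_bits, key=n_eff)
    return best, n_eff(best)

# Hypothetical capacity factors; real eff(b) values are fit empirically and
# drop sharply below 4 bits.
assumed_eff = {16: 1.00, 8: 0.98, 4: 0.80, 3: 0.55, 2: 0.25}
bits, n_eff_best = best_bitwidth(16e9, assumed_eff)   # 16 GB of weight storage
print(f"best bit-width: {bits}-bit, effective parameter count ~ {n_eff_best:.2e}")
```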
These scaling laws provide a rigorous, predictive formalism for evaluating, designing, and training quantized and compressed LLMs under realistic compute and deployment constraints.