
Predictive model of PTQ-induced degradation and contributing factors

Determine whether a predictive model of post-training quantization-induced degradation in large language models can be developed, and identify which additional training or model factors, beyond those analyzed in this study, contribute to quantization degradation.


Background

The paper conducts a large-scale analysis of post-training quantization (PTQ) robustness across multiple open-source LLM training trajectories and controlled experiments. It finds that learning rate scheduling and certain training interventions such as weight averaging correlate with PTQ-induced degradation, while gradient norm magnitudes do not. Weight decay shows some correlation with improved robustness.
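
As a rough illustration of the quantity under study, the sketch below simulates round-to-nearest int8 weight quantization on a PyTorch model and reports the resulting loss increase on one evaluation batch. The helper names (`quantize_weight_rtn`, `ptq_degradation`) and the round-to-nearest scheme are illustrative assumptions, not the paper's measurement pipeline.

```python
# Minimal sketch: simulate round-to-nearest int8 weight quantization and
# measure the loss increase, one common way to quantify PTQ-induced degradation.
import torch
import torch.nn as nn

def quantize_weight_rtn(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Per-output-channel absmax round-to-nearest quantization (simulated)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

@torch.no_grad()
def ptq_degradation(model: nn.Module, batch, loss_fn) -> float:
    """Return loss(quantized) - loss(full precision) on one evaluation batch."""
    inputs, targets = batch
    base_loss = loss_fn(model(inputs), targets).item()
    # Quantize every Linear weight in place, keeping copies to restore afterwards.
    originals = {}
    for name, mod in model.named_modules():
        if isinstance(mod, nn.Linear):
            originals[name] = mod.weight.data.clone()
            mod.weight.data = quantize_weight_rtn(mod.weight.data)
    quant_loss = loss_fn(model(inputs), targets).item()
    for name, mod in model.named_modules():
        if name in originals:
            mod.weight.data = originals[name]
    return quant_loss - base_loss
```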

Despite these observations, the authors report erratic behavior in some training runs and state that it is still unclear whether a reliable predictive model for quantization degradation can be formulated or which other factors may be responsible for the variability.
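
To make the open question concrete, the sketch below fits a simple linear model mapping per-run training factors to observed PTQ degradation; every feature name and numeric value is a hypothetical placeholder, not data from the paper.

```python
# Illustrative sketch of what a predictive model could look like: regress
# observed PTQ degradation on per-run training factors. All values are
# hypothetical placeholders, not measurements from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per training run: [final learning rate, weight decay,
# mean gradient norm, weight averaging used (0/1)].
X = np.array([
    [3e-4, 0.1, 1.2, 0],
    [1e-4, 0.0, 0.9, 1],
    [3e-4, 0.1, 1.5, 1],
    [1e-5, 0.1, 0.7, 0],
])
# Observed PTQ-induced loss increase for each run (placeholder values).
y = np.array([0.08, 0.15, 0.12, 0.03])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)          # sign/magnitude hints at each factor's role
print("predicted degradation:", model.predict(X))
```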

References

As a result, it remains unclear whether a predictive model of quantization degradation is within reach, or what additional factors may be at play.

Training Dynamics Impact Post-Training Quantization Robustness (Catalan-Tatjer et al., arXiv:2510.06213, 7 Oct 2025), Section 6 (Discussion)