Scaling Laws for Floating Point Quantization Training (2501.02423v1)

Published 5 Jan 2025 in cs.LG, cs.AR, and cs.CL

Abstract: Low-precision training is considered an effective strategy for reducing both training and downstream inference costs. Previous scaling laws for precision mainly focus on integer quantization, paying less attention to the constituents of floating-point quantization, and thus cannot fit LLM losses well in this scenario. In contrast, while floating-point quantization training is more commonly implemented in production, the research on it has been relatively superficial. In this paper, we thoroughly explore the effects of floating-point quantization targets, exponent bits, mantissa bits, and the calculation granularity of the scaling factor on the floating-point quantization training performance of LLM models. While presenting an accurate unified scaling law for floating-point quantization, we also provide valuable suggestions for the community: (1) Exponent bits contribute slightly more to the model performance than mantissa bits. We provide the optimal exponent-mantissa bit ratio for different bit numbers, which is available for future reference by hardware manufacturers; (2) We discover the formation of the critical data size in low-precision LLM training. Too much training data exceeding the critical data size will inversely bring in degradation of LLM performance; (3) The optimal floating-point quantization precision is directly proportional to the computational power, but within a wide computational power range, we estimate that the best cost-performance precision lies between 4-8 bits.

An Analysis of "Scaling Laws for Floating-Point Quantization Training"

The paper "Scaling Laws for Floating-Point Quantization Training" presents a comprehensive study of the effects of floating-point quantization during the training of LLMs. In machine learning, reducing the precision of computations is a well-established tactic for decreasing both the resource demands and time overhead of training and inference. While prior research predominantly emphasized integer quantization, often neglecting nuances specific to floating-point arithmetic, this work provides an incisive analysis of floating-point quantization effects, including the roles of exponent bits, mantissa bits, and scaling-factor granularity.

Core Contributions and Findings

  1. Unified Scaling Law Derivation: The authors propose a unified scaling law that accurately predicts LLM performance under varying floating-point quantization settings. The law integrates five primary factors: data size (D), model size (N), exponent bits (E), mantissa bits (M), and scaling-factor block size (B). A hedged sketch of such a law appears after this list.
  2. Exponent vs. Mantissa Contribution: A significant insight from the investigation is that exponent bits have a slightly larger impact on model performance than mantissa bits. From this, the authors derive an optimal exponent-mantissa bit ratio for each total bit width, which can guide hardware design for low-precision formats.
  3. Data Size and Overfitting: The paper also uncovers a critical data size threshold beyond which performance degrades for LLMs trained at low precision. This critical point is theoretically derived and empirically validated, indicating a source of inefficiency, akin to overfitting, once it is exceeded (see the sketch after this list).
  4. Precision and Computational Power Optimization: The paper finds that the compute-optimal floating-point precision increases with the available computational power. Across a wide range of compute budgets, the best cost-performance precision lies between 4 and 8 bits.
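
The paper's fitted functional form and coefficients are not reproduced here. As a rough illustration only, the sketch below assumes a Chinchilla-style additive law plus a hypothetical precision penalty that grows with data size D and shrinks as exponent bits E and mantissa bits M increase or as the scaling-factor block size B shrinks; every coefficient is a made-up placeholder, not a value fitted in the paper. Under such a form, a finite loss-minimizing "critical data size" emerges, and one can search for the best exponent/mantissa split at a fixed bit budget.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical unified scaling law L(N, D, E, M, B).
# The functional form and every coefficient below are illustrative
# placeholders, NOT the paper's fitted law or constants.
COEF = dict(n=400.0, alpha=0.34, d=800.0, beta=0.28, eps=1.7,
            k=0.001, delta=0.5, nu=0.5)

def loss(N, D, E, M, B, c=COEF):
    """Chinchilla-style N and D terms plus a precision penalty that grows
    with data size D and shrinks as E and M increase or B decreases.
    Exponent bits are weighted slightly more than mantissa bits (the 1.1
    power is a placeholder reflecting the paper's qualitative finding)."""
    base = c["n"] / N ** c["alpha"] + c["d"] / D ** c["beta"] + c["eps"]
    penalty = (c["k"] * np.sqrt(D) * np.log2(B)
               / (N ** c["delta"] * (E + 1) ** 1.1 * (M + 1)) ** c["nu"])
    return base + penalty

def critical_data_size(N, E, M, B):
    """Data size that minimizes loss: past this point the precision penalty
    outweighs the usual gain from more training tokens."""
    res = minimize_scalar(lambda logD: loss(N, 10 ** logD, E, M, B),
                          bounds=(7, 13), method="bounded")  # 1e7..1e13 tokens
    return 10 ** res.x

def best_exponent_mantissa_split(N, D, B, total_bits):
    """Exhaustive search over E/M splits for a fixed bit budget (1 sign bit)."""
    splits = [(e, total_bits - 1 - e) for e in range(1, total_bits - 1)]
    return min(splits, key=lambda em: loss(N, D, em[0], em[1], B))

if __name__ == "__main__":
    N, B = 1e9, 128  # 1B-parameter model, block-wise scaling factor
    print("FP8 (E4M3) critical data size:", critical_data_size(N, 4, 3, B))
    print("Best E/M split for 8 bits:", best_exponent_mantissa_split(N, 1e11, B, 8))
```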

Methodological Details

The authors designed a suite of experiments, totaling 366 training runs across a range of LLM sizes and hyperparameters, to underpin these findings. The models were trained on subsets of the Dolma V1.7 dataset with varying combinations of exponent and mantissa settings, while the effects of different quantization targets were also analyzed. The experiments spanned numerous configurations to isolate the impact of each parameter under floating-point quantization.
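
To make the fitting step concrete, here is a minimal sketch of how coefficients of such a law could be estimated from a collection of runs using scipy.optimize.curve_fit. It reuses the placeholder functional form from the earlier sketch and fits it to synthetic observations; the paper's actual form, fitted coefficients, and 366 measured losses are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(X, n, alpha, d, beta, eps, k, nu):
    """Placeholder scaling law; X bundles the five factors (N, D, E, M, B)."""
    N, D, E, M, B = X
    return (n / N ** alpha + d / D ** beta + eps
            + k * np.sqrt(D) * np.log2(B)
              / (N ** 0.5 * (E + 1) ** 1.1 * (M + 1)) ** nu)

# Synthetic "runs": random configurations with noisy losses generated from
# the placeholder law itself (stand-ins for real measured training losses).
rng = np.random.default_rng(0)
size = 64
N = rng.choice([4e8, 1e9, 2e9], size=size)
D = rng.choice([2e10, 1e11, 5e11], size=size)
E = rng.integers(1, 6, size=size).astype(float)
M = rng.integers(1, 8, size=size).astype(float)
B = rng.choice([32.0, 128.0, 512.0], size=size)
true_coeffs = (400.0, 0.34, 800.0, 0.28, 1.7, 0.001, 0.5)
y = model((N, D, E, M, B), *true_coeffs) + rng.normal(0.0, 0.01, size=size)

# Least-squares fit of the placeholder coefficients to the synthetic losses.
popt, _ = curve_fit(model, (N, D, E, M, B), y,
                    p0=(300, 0.3, 600, 0.3, 1.5, 0.002, 0.4), maxfev=50000)
print(dict(zip(["n", "alpha", "d", "beta", "eps", "k", "nu"], popt.round(3))))
```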

Implications and Future Directions

In practical terms, the findings hold considerable value for building more efficient LLM training pipelines. They offer hardware designers and model trainers actionable guidance for quantization schemes that maintain model performance while curbing computational expense.

Looking forward, the authors suggest that extending this scaling law to larger models and different architectures could broaden its applicability. Verifying its robustness on more diverse datasets and emerging floating-point formats will also be of interest. This research aligns with the ongoing pursuit of scalable and more sustainable AI methodologies, presenting pathways to refine the balance between model efficiency and cost-effectiveness in LLM training.

In conclusion, the paper contributes substantially to a deeper understanding of floating-point quantization in LLM training. It bridges a knowledge gap by proposing a well-substantiated scaling law that informs precision choices in floating-point arithmetic, thereby guiding future low-precision LLM work.

Authors (16)
  1. Xingwu Sun (32 papers)
  2. Shuaipeng Li (11 papers)
  3. Ruobing Xie (97 papers)
  4. Weidong Han (8 papers)
  5. Kan Wu (42 papers)
  6. Zhen Yang (160 papers)
  7. Yixing Li (18 papers)
  8. An Wang (58 papers)
  9. Shuai Li (295 papers)
  10. Jinbao Xue (13 papers)
  11. Yu Cheng (354 papers)
  12. Yangyu Tao (19 papers)
  13. Zhanhui Kang (45 papers)
  14. Chengzhong Xu (98 papers)
  15. Di Wang (407 papers)
  16. Jie Jiang (246 papers)