QFT: Post-training quantization via fast joint finetuning of all degrees of freedom (2212.02634v1)
Abstract: The post-training quantization (PTQ) challenge of bringing quantized neural-network accuracy close to that of the original network has drawn much attention, driven by industry demand. Many of these methods emphasize optimizing a specific degree of freedom (DoF), such as the quantization step size, preconditioning factors, or bias fixing, often chained to others in multi-step solutions. Here we rethink quantized network parameterization in a HW-aware fashion, towards a unified analysis of all quantization DoF that permits, for the first time, their joint end-to-end finetuning. Our simple, single-step, and extendable method, dubbed quantization-aware finetuning (QFT), achieves 4-bit weight quantization results on par with SoTA while staying within PTQ constraints on speed and resources.
- Alex Finkelstein
- Ella Fuchs
- Idan Tal
- Mark Grobman
- Niv Vosco
- Eldad Meller
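
As a rough illustration of what jointly finetuning several quantization degrees of freedom can look like (this is a minimal sketch, not the authors' implementation; the names `JointQuantLinear`, `step`, and `bias_corr` are hypothetical), the following PyTorch snippet makes a per-channel step size and an additive bias-correction term learnable alongside the weights, with a straight-through estimator supplying gradients through the rounding:

```python
# Hedged sketch: learnable quantization DoF (per-channel step size, bias
# correction) finetuned end-to-end together with the weights via a
# straight-through estimator. Names and structure are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def round_ste(x: torch.Tensor) -> torch.Tensor:
    """Round to nearest integer; gradient passes through unchanged (STE)."""
    return (x.round() - x).detach() + x


class JointQuantLinear(nn.Module):
    """Linear layer with fake-quantized 4-bit weights whose quantization
    parameters are ordinary nn.Parameters, so a single optimizer step
    updates weights, step sizes, and bias correction jointly."""

    def __init__(self, linear: nn.Linear, n_bits: int = 4):
        super().__init__()
        w = linear.weight.detach().clone()
        self.weight = nn.Parameter(w)
        self.bias = nn.Parameter(
            linear.bias.detach().clone()
            if linear.bias is not None
            else torch.zeros(linear.out_features)
        )
        self.qmax = 2 ** (n_bits - 1) - 1
        self.qmin = -(2 ** (n_bits - 1))
        # Per-output-channel step size, initialized from the weight range.
        step0 = w.abs().amax(dim=1, keepdim=True) / self.qmax
        self.step = nn.Parameter(step0.clamp(min=1e-8))
        # Learnable additive bias-correction term.
        self.bias_corr = nn.Parameter(torch.zeros(linear.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = round_ste(self.weight / self.step).clamp(self.qmin, self.qmax)
        w_q = q * self.step  # fake-quantized weights
        return F.linear(x, w_q, self.bias + self.bias_corr)


# Usage sketch: match the float layer's outputs on calibration data so all
# degrees of freedom are finetuned in one optimization loop.
float_layer = nn.Linear(64, 32)
qlayer = JointQuantLinear(float_layer, n_bits=4)
opt = torch.optim.Adam(qlayer.parameters(), lr=1e-4)
x = torch.randn(128, 64)  # stand-in calibration batch
loss = F.mse_loss(qlayer(x), float_layer(x).detach())
loss.backward()
opt.step()
```

In a PTQ setting such parameters would typically be optimized on a small calibration set under a tight time budget; further DoF such as preconditioning (equalization) factors could be exposed as learnable parameters in the same way.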