QFT: Post-training quantization via fast joint finetuning of all degrees of freedom (2212.02634v1)

Published 5 Dec 2022 in stat.ML, cs.CV, and cs.LG

Abstract: The post-training quantization (PTQ) challenge of bringing quantized neural network accuracy close to the original has drawn much attention, driven by industry demand. Many methods emphasize optimization of a specific degree of freedom (DoF), such as quantization step size, preconditioning factors, or bias fixing, often chained to others in multi-step solutions. Here we rethink quantized network parameterization in a HW-aware fashion, towards a unified analysis of all quantization DoFs, permitting for the first time their joint end-to-end finetuning. Our single-step, simple, and extendable method, dubbed quantization-aware finetuning (QFT), achieves 4-bit weight quantization results on par with SoTA within PTQ constraints of speed and resources.
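
The core idea the abstract compresses is to expose every quantizer degree of freedom (step size, bias, preconditioning factors, the weights themselves) as a trainable parameter and finetune them jointly in a single pass on a small calibration set. The PyTorch sketch below is only a rough illustration of that pattern, not the paper's implementation: it parameterizes a linear layer's 4-bit weight quantizer with a per-channel step size and a free bias, and uses a straight-through estimator so all of these can be optimized end-to-end. The class and variable names are invented for the example; a preconditioning factor could be added as a further trainable parameter in the same way.

```python
import torch
import torch.nn as nn


class QuantizedLinear(nn.Module):
    """Illustrative 4-bit weight quantizer with jointly trainable DoFs:
    per-channel step size, bias, and the weights themselves.
    (Hypothetical sketch, not the QFT paper's code.)"""

    def __init__(self, linear: nn.Linear, n_bits: int = 4):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach().clone())
        self.bias = nn.Parameter(
            linear.bias.detach().clone() if linear.bias is not None
            else torch.zeros(linear.out_features)
        )
        qmax = 2 ** (n_bits - 1) - 1
        self.qmin, self.qmax = -qmax - 1, qmax
        # DoF: per-output-channel step size, initialized from the weight
        # range and then finetuned end-to-end like any other parameter.
        init_step = self.weight.abs().amax(dim=1, keepdim=True) / qmax
        self.step = nn.Parameter(init_step)

    def quantize_weight(self) -> torch.Tensor:
        # Round-to-nearest with a straight-through estimator so rounding
        # passes gradients to both the weights and the step size.
        w_scaled = self.weight / self.step
        w_int = w_scaled + (torch.round(w_scaled) - w_scaled).detach()
        w_int = torch.clamp(w_int, self.qmin, self.qmax)
        return w_int * self.step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The float bias is also free, so bias correction is absorbed
        # into the same joint finetuning step.
        return nn.functional.linear(x, self.quantize_weight(), self.bias)


# Joint finetuning of all DoFs on a small calibration set, regressing
# the quantized layer's outputs onto the full-precision layer's outputs.
fp_layer = nn.Linear(128, 64)
q_layer = QuantizedLinear(fp_layer, n_bits=4)
opt = torch.optim.Adam(q_layer.parameters(), lr=1e-4)
for _ in range(100):
    x = torch.randn(32, 128)                 # calibration batch
    with torch.no_grad():
        target = fp_layer(x)                 # full-precision reference
    loss = nn.functional.mse_loss(q_layer(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because every DoF sits in the same computation graph, a single optimizer updates them together, which is the "joint end-to-end finetuning" the abstract contrasts with multi-step pipelines that tune each DoF in isolation.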

Authors (6)
  1. Alex Finkelstein (2 papers)
  2. Ella Fuchs (2 papers)
  3. Idan Tal (2 papers)
  4. Mark Grobman (5 papers)
  5. Niv Vosco (2 papers)
  6. Eldad Meller (2 papers)
Citations (5)