Augmenting Hessians with Inter-Layer Dependencies for Mixed-Precision Post-Training Quantization (2306.04879v1)

Published 8 Jun 2023 in cs.LG

Abstract: Efficiently serving neural network models with low latency is becoming more challenging due to increasing model complexity and parameter count. Model quantization offers a solution which simultaneously reduces memory footprint and compute requirements. However, aggressive quantization may lead to an unacceptable loss in model accuracy owing to differences in sensitivity to numerical imperfection across different layers in the model. To address this challenge, we propose a mixed-precision post-training quantization (PTQ) approach that assigns different numerical precisions to tensors in a network based on their specific needs, for a reduced memory footprint and improved latency while preserving model accuracy. Previous works rely on layer-wise Hessian information to determine numerical precision, but as we demonstrate, Hessian estimation is typically insufficient for determining an effective ordering of layer sensitivities. We address this by augmenting the estimated Hessian with additional information to capture inter-layer dependencies. We demonstrate that this consistently improves PTQ performance along the accuracy-latency Pareto frontier across multiple models. Our method combines second-order information and inter-layer dependencies to guide a bisection search, finding quantization configurations within a user-configurable model accuracy degradation range. We evaluate the effectiveness of our method on the ResNet50, MobileNetV2, and BERT models. Our experiments demonstrate latency reductions of $25.48\%$, $21.69\%$, and $33.28\%$ respectively, compared to a 16-bit baseline, while maintaining model accuracy to within $99.99\%$ of the baseline model.
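
The abstract outlines the core search procedure: layers are ranked by an augmented Hessian sensitivity score, and a bisection search finds the most aggressive low-precision assignment that stays within the user-configurable accuracy-degradation budget. The sketch below illustrates only that bisection idea; the helpers quantize_model and evaluate, and the precomputed sensitivity scores, are hypothetical placeholders rather than the authors' implementation, and the actual inter-layer Hessian augmentation is not reproduced here.

    # Minimal sketch of a sensitivity-guided bisection search, assuming the
    # augmented Hessian scores are already computed per layer (hypothetical inputs).
    def find_mixed_precision_config(model, layers, sensitivity,
                                    quantize_model, evaluate,
                                    baseline_acc, tolerance=1e-4):
        # Rank layers from least to most sensitive; low precision is assigned
        # greedily starting from the least sensitive end of the ordering.
        ordered = sorted(layers, key=lambda layer: sensitivity[layer])

        lo, hi = 0, len(ordered)   # candidate counts of low-precision layers
        best = 0
        while lo <= hi:
            mid = (lo + hi) // 2
            candidate = quantize_model(model, low_precision_layers=ordered[:mid])
            acc = evaluate(candidate)
            if baseline_acc - acc <= tolerance:
                best = mid         # within the degradation budget: try quantizing more
                lo = mid + 1
            else:
                hi = mid - 1       # too much accuracy loss: back off
        return ordered[:best]      # layers assigned low precision

This sketch assumes accuracy degrades roughly monotonically as more low-sensitivity layers are moved to low precision, which is the condition under which bisecting over the sensitivity ordering is meaningful.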

Authors (10)
  1. Clemens JS Schaefer (10 papers)
  2. Navid Lambert-Shirzad (2 papers)
  3. Xiaofan Zhang (79 papers)
  4. Chiachen Chou (3 papers)
  5. Tom Jablin (2 papers)
  6. Jian Li (667 papers)
  7. Elfie Guo (2 papers)
  8. Caitlin Stanton (2 papers)
  9. Siddharth Joshi (28 papers)
  10. Yu Emma Wang (9 papers)
Citations (2)