Accuracy to Throughput Trade-offs for Reduced Precision Neural Networks on Reconfigurable Logic (1807.10577v1)

Published 17 Jul 2018 in cs.CV

Abstract: Modern CNNs are typically implemented with floating point linear algebra. Recently, reduced precision neural networks (NNs) have gained popularity because they require significantly less memory and fewer computational resources than floating point implementations, which is particularly important in power-constrained compute environments. However, reducing precision often comes at a small cost to the accuracy of the resulting network. In this work, we investigate the accuracy-throughput trade-off for various parameter precisions applied to different types of NN models. We first propose a quantization training strategy that allows reduced precision NN inference with a lower memory footprint and competitive model accuracy. We then quantitatively formulate the relationship between data representation and hardware efficiency. Finally, our experiments provide insightful observations. For example, one of our tests shows that 32-bit floating point is more hardware efficient than 1-bit parameters for achieving 99% MNIST accuracy. In general, 2-bit and 4-bit fixed point parameters show a better hardware trade-off on small-scale datasets like MNIST and CIFAR-10, while 4-bit provides the best trade-off on large-scale tasks like AlexNet on ImageNet within our tested problem domain.
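The k-bit fixed point parameters compared in the abstract can be illustrated with a minimal uniform quantizer. This is only a sketch of the general technique, not the paper's specific training strategy; the function name, the per-tensor scaling, and the symmetric clipping range are all assumptions for illustration.

```python
import numpy as np

def quantize_fixed_point(w, bits):
    """Uniformly quantize a weight tensor to signed `bits`-bit fixed point.

    Illustrative only -- the paper's actual quantization training scheme
    may differ (e.g. in scaling or rounding).
    """
    levels = 2 ** (bits - 1) - 1          # e.g. 7 positive levels for 4-bit
    scale = np.max(np.abs(w)) / levels    # assumed per-tensor scale factor
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale                      # dequantized values for inference

w = np.array([0.9, -0.45, 0.12, -0.03])
print(quantize_fixed_point(w, 4))
```

Lowering `bits` shrinks the memory footprint (2-bit stores 16x fewer bits than 32-bit floats) at the cost of coarser weight resolution, which is the accuracy-throughput trade-off the paper quantifies.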

Authors (8)
  1. Jiang Su
  2. Nicholas J. Fraser
  3. Giulio Gambardella
  4. Michaela Blott
  5. Gianluca Durelli
  6. David B. Thomas
  7. Philip Leong
  8. Peter Y. K. Cheung
Citations (22)