Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge (1908.02386v1)

Published 6 Aug 2019 in cs.LG, cs.NE, and stat.ML

Abstract: Low-precision DNNs have been extensively explored in order to reduce the size of DNN models for edge devices. Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision in [5..8]-bits. However, previous studies were limited to studying posit for DNN inference only. In this paper, we propose the Cheetah framework, which supports both DNN training and inference using posits, as well as other commonly used formats. Additionally, the framework is amenable for different quantization approaches and supports mixed-precision floating point and fixed-point numerical formats. Cheetah is evaluated on three datasets: MNIST, Fashion MNIST, and CIFAR-10. Results indicate that 16-bit posits outperform 16-bit floating point in DNN training. Furthermore, performing inference with [5..8]-bit posits improves the trade-off between performance and energy-delay-product over both [5..8]-bit float and fixed-point.
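The [5..8]-bit posit formats the abstract refers to follow the standard posit encoding (Gustafson & Yonemoto, 2017): a sign bit, a variable-length regime field, up to es exponent bits, and a fraction, decoding to (-1)^s · useed^k · 2^e · (1 + f) with useed = 2^(2^es). The sketch below is a minimal, illustrative Python decoder of that format; it is not taken from the Cheetah codebase, and the function name `decode_posit` and the default (n=8, es=1) configuration are assumptions for demonstration only.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float.

    Illustrative sketch of the standard posit encoding; not Cheetah's code.
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):              # 100...0 encodes NaR ("not a real")
        return float("nan")

    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:                           # negative posits are 2's complements
        bits = (-bits) & mask

    body, nb = bits & ((1 << (n - 1)) - 1), n - 1
    first = (body >> (nb - 1)) & 1         # leading regime bit
    run = 1                                # length of the run of identical bits
    while run < nb and ((body >> (nb - 1 - run)) & 1) == first:
        run += 1
    k = run - 1 if first else -run         # regime value

    rem = max(nb - run - 1, 0)             # bits left after regime + terminator
    tail = body & ((1 << rem) - 1)
    e_bits = min(es, rem)
    exp = (tail >> (rem - e_bits)) << (es - e_bits)  # truncated exp bits read as 0
    f_bits = rem - e_bits
    frac = tail & ((1 << f_bits) - 1)

    useed = 2 ** (2 ** es)                 # regime scale factor
    mantissa = 1.0 + (frac / (1 << f_bits) if f_bits else 0.0)
    return sign * useed ** k * 2.0 ** exp * mantissa


# Quick demonstration for 8-bit posits with es=1:
for b in (0b01000000, 0b01100000, 0b00100000, 0b01001000):
    print(f"{b:08b} -> {decode_posit(b, n=8, es=1)}")
# 01000000 -> 1.0, 01100000 -> 4.0, 00100000 -> 0.25, 01001000 -> 1.5
```

The tapered-precision behavior this encoding produces (more fraction bits near 1.0, fewer at the extremes) is what makes [5..8]-bit posits competitive with same-width float and fixed-point for DNN tensors, per the abstract's results.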

Authors (4)
  1. Hamed F. Langroudi (5 papers)
  2. Zachariah Carmichael (17 papers)
  3. David Pastuch (1 paper)
  4. Dhireesha Kudithipudi (31 papers)
Citations (23)