
Recurrent Neural Networks With Limited Numerical Precision (1611.07065v2)

Published 21 Nov 2016 in cs.NE

Abstract: Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many machine learning tasks, but their demands on memory and computational power are often high. There is therefore great interest in optimizing the computations performed with these models, especially in the context of specialized low-power hardware for deep networks. One way of reducing the computational requirements is to limit the numerical precision of the network weights and biases, which this work addresses for the case of RNNs. We present results from applying different stochastic and deterministic reduced-precision training methods to two major RNN types, which are then tested on three datasets. The results show that the stochastic and deterministic ternarization, pow2-ternarization, and exponential quantization methods yield low-precision RNNs that achieve similar, and on certain datasets even higher, accuracy, providing a path towards training more efficient implementations of RNNs in specialized hardware.
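
To make the quantization idea concrete, the sketch below illustrates two of the named schemes on a weight matrix: stochastic ternarization and a power-of-two rounding in the spirit of pow2-ternarization. The function names, the clipping range, and the zeroing threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def stochastic_ternarize(w, rng=None):
    """Stochastically round each weight to {-1, 0, +1}.

    The clipped magnitude of a weight is used as the probability of
    keeping a nonzero value of the same sign; otherwise the weight
    becomes 0. A generic sketch of stochastic ternarization, not
    necessarily the exact scheme used in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.clip(w, -1.0, 1.0)
    p = np.abs(w)                        # probability of a nonzero result
    keep = rng.random(w.shape) < p       # one Bernoulli draw per weight
    return np.sign(w) * keep

def pow2_ternarize(w, zero_threshold=1e-3):
    """Deterministically map weights to 0 or signed powers of two.

    Weights below an assumed zeroing threshold are set to 0; the rest
    are rounded to +/- 2**round(log2|w|), so multiplications can be
    replaced by bit shifts in hardware. The threshold and rounding rule
    here are assumptions for illustration.
    """
    out = np.zeros_like(w)
    nonzero = np.abs(w) > zero_threshold
    exponents = np.round(np.log2(np.abs(w[nonzero])))
    out[nonzero] = np.sign(w[nonzero]) * 2.0 ** exponents
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.5, size=(4, 4)).astype(np.float32)
    print(stochastic_ternarize(W, rng))  # entries in {-1, 0, +1}
    print(pow2_ternarize(W))             # entries are 0 or signed powers of 2
```

In low-precision training schemes of this kind, the quantized copies of the weights are typically used in the forward and backward passes while full-precision weights accumulate the gradient updates; that bookkeeping is omitted from the sketch.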

Authors (5)
  1. Joachim Ott (5 papers)
  2. Zhouhan Lin (57 papers)
  3. Ying Zhang (389 papers)
  4. Shih-Chii Liu (44 papers)
  5. Yoshua Bengio (601 papers)
Citations (73)
