Efficient Hardware Realizations of Feedforward Artificial Neural Networks (2108.02073v1)
Abstract: This article presents design techniques for efficient hardware implementation of feedforward artificial neural networks (ANNs) under parallel and time-multiplexed architectures. To reduce design complexity, after the ANN weights are determined in a training phase, we introduce a technique to find the minimum quantization value used to convert the floating-point weight values to integers. For each design architecture, we also propose an algorithm that tunes the integer weights to reduce the hardware complexity without a loss in hardware accuracy. Furthermore, the multiplications of constant weights by input variables are implemented under the shift-adds architecture using the fewest addition/subtraction operations found by prominent previously proposed algorithms. Finally, we introduce a computer-aided design (CAD) tool, called SIMURG, that can automatically generate a hardware description of an ANN based on its structure and the solutions of the proposed design techniques and algorithms. Experimental results indicate that the tuning techniques can significantly reduce ANN hardware complexity under a given design architecture, and that the multiplierless design of an ANN can lead to a significant reduction in area and energy consumption at the cost of a slight increase in latency.
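The abstract names two core ideas: quantizing trained floating-point weights to integers using the smallest quantization value that preserves accuracy, and replacing constant multiplications with shift-add networks. The sketch below illustrates both in Python; it is a minimal illustration under stated assumptions, not the paper's algorithms. The `evaluate` callback, the `max_q` search bound, and the use of canonical signed-digit (CSD) recoding as the shift-add decomposition are all assumptions for illustration — the paper relies on previously proposed multiplierless constant-multiplication algorithms, which generally use fewer operations than plain CSD by sharing intermediate terms across weights.

```python
def quantize(weights, q):
    """Scale floating-point weights by 2**q and round to integers."""
    return [round(w * (1 << q)) for w in weights]

def min_quantization_bits(weights, evaluate, max_q=16):
    """Smallest fractional bit-width q whose integer weights still pass an
    accuracy check. `evaluate` is a hypothetical user-supplied callback that
    returns True when the quantized ANN meets the target hardware accuracy;
    the paper's exact acceptance criterion is not reproduced here."""
    for q in range(1, max_q + 1):
        if evaluate(quantize(weights, q), q):
            return q
    return max_q  # fall back to the largest bit-width searched

def csd_digits(c):
    """Canonical signed-digit recoding of an integer constant c: a list of
    digits in {-1, 0, +1}, least-significant first, with no two adjacent
    nonzero digits. Each nonzero digit costs one shifted add/subtract."""
    digits = []
    while c != 0:
        if c & 1:
            d = 2 - (c & 3)  # +1 if c % 4 == 1, -1 if c % 4 == 3
            digits.append(d)
            c -= d
        else:
            digits.append(0)
        c >>= 1
    return digits

def multiplierless_product(x, c):
    """Compute x * c using only shifts and adds/subtracts, mimicking a
    shift-adds hardware realization of a constant multiplication."""
    acc = 0
    for shift, d in enumerate(csd_digits(c)):
        if d:
            acc += d * (x << shift)  # in hardware: one adder/subtractor
    return acc
```

For example, `csd_digits(23)` yields `[-1, 0, 0, -1, 0, 1]`, so `x * 23` is realized as `(x << 5) - (x << 3) - x`: two subtractors and no multiplier.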