
Efficient Hardware Realizations of Feedforward Artificial Neural Networks (2108.02073v1)

Published 4 Aug 2021 in cs.AR

Abstract: This article presents design techniques proposed for efficient hardware implementation of feedforward artificial neural networks (ANNs) under parallel and time-multiplexed architectures. To reduce the design complexity, after the ANN weights are determined in a training phase, we introduce a technique to find the minimum quantization value used to convert the floating-point weight values to integers. For each design architecture, we also propose an algorithm that tunes the integer weights to reduce the hardware complexity without a loss in hardware accuracy. Furthermore, the multiplications of constant weights by input variables are implemented under the shift-adds architecture using the fewest addition/subtraction operations found by prominent previously proposed algorithms. Finally, we introduce a computer-aided design (CAD) tool, called SIMURG, that can automatically describe an ANN design in hardware based on the ANN structure and the solutions of the proposed design techniques and algorithms. Experimental results indicate that the tuning techniques can significantly reduce the ANN hardware complexity under a given design architecture, and that the multiplierless design of an ANN can lead to a significant reduction in area and energy consumption while increasing the latency only slightly.
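
To give a rough sense of the two core ideas in the abstract, the sketch below is a minimal illustration and not the paper's actual method: quantize_weights applies a simple uniform scaling by 2^q (whereas the paper searches for the minimum quantization value that preserves hardware accuracy), and shift_add_terms decomposes a constant into signed power-of-two terms via canonical signed-digit (CSD) recoding so a constant multiplication becomes shifts and additions/subtractions (whereas the paper relies on prominent previously proposed multiplierless constant-multiplication algorithms that can share sub-expressions and use fewer operations). The function names and the choice of CSD recoding are illustrative assumptions.

```python
import numpy as np

def quantize_weights(weights, q):
    """Illustrative uniform quantization: scale floating-point weights by 2**q
    and round to integers. (The paper instead searches for the minimum
    quantization value that avoids a loss in hardware accuracy.)"""
    return np.round(np.asarray(weights, dtype=float) * (1 << q)).astype(int)

def shift_add_terms(c):
    """Decompose a positive integer constant c into signed power-of-two terms
    using canonical signed-digit (CSD) recoding, so that c*x can be realized
    with shifts and additions/subtractions only. This is a simple stand-in for
    the more sophisticated multiplierless algorithms referenced in the paper."""
    terms, k = [], 0
    while c:
        if c & 1:
            d = -1 if (c & 3) == 3 else 1  # pick +/-1 so more trailing zeros remain
            terms.append((d, k))           # contributes d * 2**k
            c -= d
        c >>= 1
        k += 1
    return terms

# Example: realizing 105*x with shifts and adds.
# shift_add_terms(105) -> [(1, 0), (1, 3), (-1, 5), (1, 7)]
# i.e. 105*x = x + (x << 3) - (x << 5) + (x << 7)
```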

Citations (1)
