
A Power-Efficient Binary-Weight Spiking Neural Network Architecture for Real-Time Object Classification (2003.06310v1)

Published 12 Mar 2020 in eess.SP, cs.AR, cs.CV, and cs.NE

Abstract: Neural network hardware is considered an essential part of future edge devices. In this paper, we propose a binary-weight spiking neural network (BW-SNN) hardware architecture for low-power real-time object classification on edge platforms. This design stores a full neural network on-chip, and hence requires no off-chip bandwidth. The proposed systolic array maximizes data reuse for a typical convolutional layer. A 5-layer convolutional BW-SNN hardware is implemented in 90nm CMOS. Compared with state-of-the-art designs, the area cost and energy per classification are reduced by 7× and 23×, respectively, while also achieving a higher accuracy on the MNIST benchmark. This is also a pioneering SNN hardware architecture that supports advanced CNN architectures.
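The core idea in the abstract, binary ±1 weights driven by binary spikes so that convolution reduces to signed accumulation with no multipliers, can be illustrated with a minimal software sketch. The function name, tensor shapes, and threshold/reset scheme below are illustrative assumptions, not the paper's hardware design or systolic dataflow.

```python
# Minimal sketch (not the paper's architecture): one timestep of a
# binary-weight spiking convolution with integrate-and-fire neurons.
import numpy as np

def bw_snn_conv_step(spikes_in, weights, membrane, threshold=1.0):
    """spikes_in : (C_in, H, W) binary spike map for this timestep
    weights   : (C_out, C_in, K, K) weights constrained to {-1, +1}
    membrane  : (C_out, H-K+1, W-K+1) persistent membrane potentials
    Returns the updated membrane potentials and the output spike map."""
    c_out, c_in, k, _ = weights.shape
    h_out = spikes_in.shape[1] - k + 1
    w_out = spikes_in.shape[2] - k + 1

    # Accumulate synaptic input: with binary weights and binary spikes this
    # is only signed additions, which is the main saving a BW-SNN exploits.
    for co in range(c_out):
        for y in range(h_out):
            for x in range(w_out):
                patch = spikes_in[:, y:y + k, x:x + k]
                membrane[co, y, x] += np.sum(weights[co] * patch)

    # Integrate-and-fire: spike where the potential crosses the threshold,
    # then reset those neurons by subtracting the threshold.
    spikes_out = (membrane >= threshold).astype(np.float32)
    membrane -= spikes_out * threshold
    return membrane, spikes_out
```

Because the weights are restricted to ±1 and the activations are spikes, the whole network fits in small on-chip memory and each synaptic operation is an add or subtract; the paper's systolic array is about scheduling these accumulations to maximize data reuse, which this sequential sketch does not attempt to model.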

Authors (5)
  1. Pai-Yu Tan (1 paper)
  2. Po-Yao Chuang (1 paper)
  3. Yen-Ting Lin (117 papers)
  4. Cheng-Wen Wu (2 papers)
  5. Juin-Ming Lu (1 paper)
Citations (7)
