VSA: Reconfigurable Vectorwise Spiking Neural Network Accelerator (2205.00780v1)

Published 2 May 2022 in cs.AR

Abstract: Spiking neural networks (SNNs), which enable low-power designs on edge devices, have recently attracted significant research interest. However, the temporal characteristic of SNNs causes high latency, high bandwidth demand, and high energy consumption in hardware. In this work, we propose a binary-weight spiking model with IF-based Batch Normalization that achieves small time steps and low hardware cost when trained directly with an input encoding layer and spatio-temporal back propagation (STBP). In addition, we propose a vectorwise hardware accelerator that is reconfigurable across models and inference time steps, and that also supports the encoding layer by accepting multi-bit input. The required memory bandwidth is further reduced by a two-layer fusion mechanism. The implementation results show competitive accuracy on the MNIST and CIFAR-10 datasets with only 8 time steps, and achieve a power efficiency of 25.9 TOPS/W.
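
To make the spiking-inference flow described in the abstract concrete, below is a minimal NumPy sketch of a binary-weight integrate-and-fire (IF) layer evaluated over a small number of time steps, with the multi-bit input injected through a simple encoding stage. All names (`if_layer_step`, `run_snn`), the threshold value, and the soft-reset scheme are illustrative assumptions; the paper's IF-based Batch Normalization, STBP training, two-layer fusion, and vectorwise accelerator datapath are not modeled here.

```python
import numpy as np

def if_layer_step(v_mem, x_spikes, w_binary, threshold=1.0):
    """One time step of an integrate-and-fire (IF) layer with binary weights.

    v_mem:     membrane potentials, shape (out_features,)
    x_spikes:  binary input spikes, shape (in_features,)
    w_binary:  weights in {-1, +1}, shape (out_features, in_features)
    """
    # Accumulate weighted input spikes into the membrane potential.
    v_mem = v_mem + w_binary @ x_spikes
    # Fire where the potential crosses the threshold, then soft-reset those units.
    out_spikes = (v_mem >= threshold).astype(np.float64)
    v_mem = np.where(out_spikes > 0, v_mem - threshold, v_mem)
    return v_mem, out_spikes


def run_snn(x_analog, w_binary, time_steps=8, threshold=1.0):
    """Run a single binary-weight IF layer for `time_steps` steps.

    The input stage acts as a simple encoding layer: the multi-bit input
    x_analog is integrated as a constant current each step and converted
    into spike trains that feed the binary-weight layer.
    """
    out_features, in_features = w_binary.shape
    v_in = np.zeros(in_features)
    v_out = np.zeros(out_features)
    spike_count = np.zeros(out_features)

    for _ in range(time_steps):
        # Encoding: input neurons integrate the analog input every step.
        v_in = v_in + x_analog
        in_spikes = (v_in >= threshold).astype(np.float64)
        v_in = np.where(in_spikes > 0, v_in - threshold, v_in)

        # Binary-weight IF layer driven by the encoded spikes.
        v_out, out_spikes = if_layer_step(v_out, in_spikes, w_binary, threshold)
        spike_count += out_spikes

    # Spike rates over the time window serve as the layer's output activations.
    return spike_count / time_steps


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.where(rng.standard_normal((4, 16)) >= 0, 1.0, -1.0)  # binary weights
    x = rng.random(16)                                          # multi-bit input
    print(run_snn(x, w, time_steps=8))
```

In this sketch, output spike rates accumulated over the window stand in for activations, which is why the number of time steps trades accuracy against latency and energy; the abstract's claim of competitive accuracy at only 8 time steps should be read against that trade-off.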

Citations (7)
