STI-SNN: A 0.14 GOPS/W/PE Single-Timestep Inference FPGA-based SNN Accelerator with Algorithm and Hardware Co-Design (2506.08842v1)

Published 10 Jun 2025 in cs.AR

Abstract: Brain-inspired Spiking Neural Networks (SNNs) have attracted attention for their event-driven characteristics and high energy efficiency. However, the temporal dependency and irregularity of spikes present significant challenges for hardware parallel processing and data reuse, causing some existing accelerators to fall short in processing latency and energy efficiency. To overcome these challenges, we introduce the STI-SNN accelerator, designed for resource-constrained applications with high energy efficiency, flexibility, and low latency. The accelerator is designed through algorithm and hardware co-design. Firstly, STI-SNN can perform inference in a single timestep. At the algorithm level, we introduce a temporal pruning approach based on the temporal efficient training (TET) loss function. This approach alleviates spike disappearance during timestep reduction, maintains inference accuracy, and expands TET's range of application. At the hardware level, we analyze data access patterns and adopt the output stationary (OS) dataflow, eliminating the need to store membrane potentials and the associated memory access operations. Furthermore, based on the OS dataflow, we propose a compressed and sorted representation of spikes, which is cached in a line buffer to reduce memory access cost and improve reuse efficiency. Secondly, STI-SNN supports different convolution methods. By adjusting the computation mode of processing elements (PEs) and parameterizing the computation array, STI-SNN can accommodate lightweight models based on depthwise separable convolutions (DSCs), further enhancing hardware flexibility. Lastly, STI-SNN also supports both inter-layer and intra-layer parallel processing. For inter-layer parallelism, we ...
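
The dataflow ideas in the abstract can be made concrete with a small software sketch. The listing below is a minimal, illustrative model, not the authors' RTL or their exact line-buffer schedule: it compresses a binary input spike map into a sorted coordinate list, then computes one spiking convolution layer with an output-stationary loop, so each membrane potential lives in a local accumulator and is thresholded once for the single timestep, never written back to memory. All shapes, function names (compress_spikes, conv_output_stationary), and the threshold value are assumptions for illustration.

    # Minimal software sketch (assumed, not from the paper) of two ideas in the
    # abstract: (1) inputs kept as a compressed, sorted list of spike coordinates,
    # and (2) an output-stationary loop that accumulates each membrane potential
    # in a local register so it is never stored to memory before thresholding.
    import numpy as np

    def compress_spikes(spike_map):
        """Return active spike coordinates (c, y, x) sorted in raster order.

        Only nonzero positions are kept, which is what makes the representation
        cheap to cache in a line buffer.
        """
        coords = np.argwhere(spike_map > 0)          # (N, 3) array of (c, y, x)
        order = np.lexsort((coords[:, 2], coords[:, 1], coords[:, 0]))
        return coords[order]

    def conv_output_stationary(spike_map, weights, threshold=1.0):
        """Single-timestep spiking conv layer with output-stationary accumulation.

        spike_map : (C_in, H, W) binary input spikes
        weights   : (C_out, C_in, K, K) synaptic weights
        Returns a (C_out, H-K+1, W-K+1) binary output spike map.
        """
        c_out, c_in, k, _ = weights.shape
        _, h, w = spike_map.shape
        out_h, out_w = h - k + 1, w - k + 1
        spikes_out = np.zeros((c_out, out_h, out_w), dtype=np.uint8)

        coords = compress_spikes(spike_map)          # only active inputs are visited

        for oc in range(c_out):
            for oy in range(out_h):
                for ox in range(out_w):
                    v_mem = 0.0                      # membrane potential stays in a
                    for (ic, iy, ix) in coords:      # local register, never in memory
                        ky, kx = iy - oy, ix - ox
                        if 0 <= ky < k and 0 <= kx < k:
                            v_mem += weights[oc, ic, ky, kx]
                    # single timestep: threshold once, no state carried over
                    spikes_out[oc, oy, ox] = 1 if v_mem >= threshold else 0
        return spikes_out

In the accelerator described by the abstract, the sorted spike list would be streamed through a line buffer and reused across PEs; the triple output loop here simply stands in for the parameterized computation array.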
