Revisiting Reset Mechanisms in Spiking Neural Networks for Sequential Modeling: Specialized Discretization for Binary Activated RNN (2504.17751v4)

Published 24 Apr 2025 in cs.NE and cs.AI

Abstract: In the field of image recognition, spiking neural networks (SNNs) have achieved performance comparable to conventional artificial neural networks (ANNs). In such applications, SNNs essentially function as traditional neural networks with quantized activation values. This article focuses on an alternative perspective, viewing SNNs as binary-activated recurrent neural networks (RNNs) for sequential modeling tasks. From this viewpoint, current SNN architectures face several fundamental challenges in sequence modeling: (1) traditional models lack effective memory mechanisms for long-range sequence modeling; (2) the biologically inspired components in SNNs (such as reset mechanisms and refractory periods) remain theoretically under-explored for sequence tasks; (3) the RNN-like computational paradigm in SNNs prevents parallel training across timesteps. To address these challenges, this study conducts a systematic analysis of the fundamental mechanisms underlying reset operations and refractory periods in binary-activated RNN-based SNN sequence models. We re-examine whether such biological mechanisms are strictly necessary for generating sparse spiking patterns, provide new theoretical explanations and insights, and ultimately propose a fixed-refractory-period SNN architecture for sequence modeling.
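
To make the mechanisms named in the abstract concrete, below is a minimal sketch of a leaky integrate-and-fire (LIF) neuron treated as a binary-activated recurrence, with a hard reset and a fixed refractory period. All names and hyperparameters here (lif_sequence, beta, v_th, refractory_steps) are illustrative assumptions for a textbook-style LIF model, not the paper's proposed architecture or its parameter choices.

import numpy as np

def lif_sequence(inputs, beta=0.9, v_th=1.0, refractory_steps=2):
    """Run a single LIF neuron over a 1-D input sequence.

    inputs: array of shape (T,), input current at each timestep.
    beta: membrane leak factor in (0, 1).
    v_th: firing threshold.
    refractory_steps: fixed number of silent steps after each spike
        (an assumed stand-in for a fixed refractory period).
    """
    v = 0.0               # membrane potential
    refractory = 0        # steps remaining in the refractory period
    spikes = np.zeros_like(inputs)

    for t, x in enumerate(inputs):
        if refractory > 0:
            # During the fixed refractory period the neuron neither
            # integrates input nor fires, which enforces sparse spiking.
            refractory -= 1
            continue
        v = beta * v + x              # leaky integration: an RNN-like recurrence
        if v >= v_th:                 # binary activation: spike or stay silent
            spikes[t] = 1.0
            v = 0.0                   # hard reset of the membrane potential
            refractory = refractory_steps
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 0.6, size=20)
    print(lif_sequence(x))

Note how the sequential dependence of v on its previous value is what blocks parallel training across timesteps, the third challenge raised in the abstract; the reset and refractory logic are the biologically inspired components whose necessity for sparse spiking the paper re-examines.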
