
RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network (2003.01811v2)

Published 25 Feb 2020 in cs.NE, cs.CV, and cs.LG

Abstract: Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks that can enable low-power event-driven data analytics. The best performing SNNs for image recognition tasks are obtained by converting a trained Analog Neural Network (ANN), consisting of Rectified Linear Units (ReLU), to an SNN composed of integrate-and-fire neurons with "proper" firing thresholds. The converted SNNs typically incur a loss in accuracy compared to that provided by the original ANN and require a sizable number of inference time-steps to achieve the best accuracy. We find that the performance degradation in the converted SNN stems from using a "hard reset" spiking neuron that is driven to a fixed reset potential once its membrane potential exceeds the firing threshold, leading to information loss during SNN inference. We propose ANN-SNN conversion using a "soft reset" spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the "residual" membrane potential above threshold at the firing instants. We demonstrate near loss-less ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10 (93.63% top-1), CIFAR-100 (70.93% top-1), and ImageNet (73.09% top-1 accuracy). Our results also show that RMP-SNN surpasses the best inference accuracy provided by the converted SNN with "hard reset" spiking neurons using 2-8 times fewer inference time-steps across network architectures and datasets.

RMP-SNN: Enhancing Spiking Neural Networks with Residual Membrane Potential Neurons

The paper "RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network" by Bing Han, Gopalakrishnan Srinivasan, and Kaushik Roy offers a significant methodological advancement in the domain of Spiking Neural Networks (SNNs). With the increasing demand for energy-efficient computational models, SNNs have garnered attention due to their biological plausibility and low-power operation, especially beneficial for neuromorphic hardware. This paper introduces the Residual Membrane Potential (RMP) neuron model, which aims to substantially mitigate accuracy loss typically associated with the conversion from Analog Neural Networks (ANNs) to SNNs.

Problem Addressed and Methodological Innovation

The primary challenge addressed in this work is the information loss occurring during the conversion process of ANNs, composed of Rectified Linear Unit (ReLU) activations, to traditional Integrate-and-Fire (IF) neuron-based SNNs. This conversion often results in decreased inference accuracy and increased latency due to inefficiencies in the traditional "hard reset" mechanism used in IF neurons. The authors identify that such a reset mechanism leads to unnecessary information loss, affecting the performance of the spiking network.
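
To make the conversion step concrete, the sketch below shows one common way to calibrate per-layer firing thresholds from the trained ANN's ReLU activations (here, the maximum observed activation). This is an illustrative, simplified recipe rather than the paper's exact procedure, and the function and argument names are assumptions.

```python
import numpy as np

def calibrate_thresholds(activations_per_layer):
    """Derive a firing threshold for each converted spiking layer from
    ReLU activations recorded while running calibration data through the
    trained ANN. The maximum activation is used here; percentile-based
    variants are also common in the conversion literature."""
    return [float(np.max(acts)) for acts in activations_per_layer]
```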

The innovative aspect of this paper lies in proposing a "soft reset" mechanism for spiking neurons, referred to as Residual Membrane Potential (RMP) neurons. Unlike "hard reset," where excess potential above the threshold is irrecoverably lost, RMP neurons retain this residual potential, thereby preserving information through spiking events. This methodological shift enables the near loss-less conversion of ANNs into SNNs, maintaining high accuracy with significantly reduced latency.
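
The difference between the two reset rules can be made concrete with a minimal integrate-and-fire update. The following sketch is illustrative only (the function and parameter names are assumptions, not the authors' implementation), but it captures the distinction the paper draws between hard and soft (RMP) reset.

```python
def if_neuron_step(v, input_current, threshold, soft_reset=True):
    """One time-step of an integrate-and-fire neuron.

    v             -- membrane potential carried over from the previous step
    input_current -- weighted sum of incoming spikes at this step
    threshold     -- firing threshold calibrated from the ANN
    soft_reset    -- True:  RMP-style reset by subtraction (residual kept)
                     False: hard reset to a fixed potential (residual lost)
    """
    v = v + input_current           # integrate the incoming current
    spike = v >= threshold          # fire if the threshold is crossed
    if spike:
        v = v - threshold if soft_reset else 0.0
    return v, float(spike)
```

Under the hard-reset rule, an input that pushes the potential to, say, 1.7 times the threshold still produces a single spike and discards the surplus; under the RMP rule that surplus carries into the next time-step, so spike rates track the ANN's ReLU activations more faithfully.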

Empirical Performance and Results

The paper provides extensive empirical validation of the proposed RMP-SNNs on benchmark datasets, namely CIFAR-10, CIFAR-100, and ImageNet, utilizing prominent architectures such as VGG-16 and several ResNet configurations. The results show that RMP-SNNs achieve accuracy nearly matching their ANN counterparts, with minimal loss after conversion, and outperform SNNs converted with conventional hard-reset neurons. For instance, the VGG-16-based RMP-SNN reaches 93.63% top-1 accuracy on CIFAR-10 and 73.09% on ImageNet, narrowing the conversion accuracy gap to below 0.5% in most cases.

Moreover, RMP-SNNs surpass the best accuracy of conventionally converted SNNs while using 2 to 8 times fewer inference time-steps across networks and datasets. This speed-up is achieved with a negligible increase in overall spiking activity (less than 2%), highlighting the efficiency of the RMP mechanism.
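
To illustrate where the inference time-steps enter, the sketch below runs a converted feed-forward SNN with soft-reset neurons for a fixed number of steps and classifies by accumulated output spikes. It assumes constant-current input coding and bias-free layers for simplicity; all names are hypothetical, and this is not the paper's evaluation code.

```python
import numpy as np

def snn_inference(weights, thresholds, x, num_steps=128):
    """Run a converted feed-forward SNN for num_steps time-steps.

    weights    -- list of layer weight matrices of shape (out, in)
    thresholds -- one calibrated firing threshold per layer
    x          -- input vector, presented as a constant current each step
    """
    potentials = [np.zeros(w.shape[0]) for w in weights]
    out_counts = np.zeros(weights[-1].shape[0])
    for _ in range(num_steps):
        signal = x                                   # constant input current
        for i, w in enumerate(weights):
            potentials[i] += w @ signal              # integrate
            spikes = (potentials[i] >= thresholds[i]).astype(float)
            potentials[i] -= spikes * thresholds[i]  # soft (RMP) reset
            signal = spikes                          # spikes drive the next layer
        out_counts += signal                         # accumulate output spikes
    return int(np.argmax(out_counts))                # rate-coded prediction
```

Fewer time-steps mean lower latency but a noisier rate estimate of each class score, which is where the reported 2-8x reduction translates directly into faster inference.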

Implications and Future Work

The introduction of RMP neurons offers significant implications for the practical deployment of SNNs in real-world applications, particularly where computational efficiency and energy consumption are paramount. This advancement opens pathways for the application of SNNs in more complex domains previously limited by accuracy and latency issues.

From a theoretical standpoint, the paper deepens our understanding of neuron dynamics, particularly of how the analog computation performed by ReLU activations can be mirrored in discrete spiking events. This could inspire further exploration of neuron models that mimic other nonlinear activations present in ANNs.

Future developments could investigate integrating RMP neurons into hardware, optimizing them for neuromorphic chips built around low-power, high-efficiency computing. Moreover, applying RMP-based conversion to broader neural network tasks, such as sequential data processing or more intricate generative tasks, could further broaden the scope and potential of SNNs.

In conclusion, the proposed RMP-SNN framework presents a robust bridge between ANNs and SNNs, showcasing the potential for SNNs to scale effectively and perform on par with traditional neural network models across various application domains.

Authors (3)
  1. Bing Han (74 papers)
  2. Gopalakrishnan Srinivasan (15 papers)
  3. Kaushik Roy (265 papers)
Citations (286)