S4NN: temporal backpropagation for spiking neural networks with one spike per neuron (1910.09495v4)

Published 21 Oct 2019 in cs.NE, cs.CV, cs.LG, and q-bio.NC

Abstract: We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order-coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer, the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, named S4NN, akin to traditional error backpropagation, yet based on latencies. We show how approximated error gradients can be computed backward in a feedforward network with any number of layers. This approach reaches state-of-the-art performance with supervised multi fully-connected layer SNNs: test accuracy of 97.4% for the MNIST dataset, and 99.2% for the Caltech Face/Motorbike dataset. Yet, the neuron model that we use, non-leaky integrate-and-fire, is much simpler than the one used in all previous works. The source codes of the proposed S4NN are publicly available at https://github.com/SRKH/S4NN.

Citations (182)

Summary

An Overview of S4NN: Temporal Backpropagation for Spiking Neural Networks

The paper "S4NN: Temporal Backpropagation for Spiking Neural Networks with One Spike per Neuron," authored by Saeed Reza Kheradpisheh and Timothée Masquelier, presents a novel approach for the supervised training of spiking neural networks (SNNs). These networks are developed with a focus on temporal coding, specifically catering to rank-order coding, where the neurons fire exactly once, with their firing sequence carrying the information critical for class determination in the readout layer.

Core Contributions

This paper introduces a temporal learning rule akin to traditional backpropagation but adapted to the challenges specific to SNNs, in particular the non-differentiability of the spike-generation process. The authors design a system (S4NN) in which neurons are non-leaky integrate-and-fire units, reducing computational overhead compared with the more elaborate neuron models used in prior work. Despite this simplicity, the system reaches test accuracies of 97.4% on MNIST and 99.2% on the Caltech Face/Motorbike dataset, at or near state-of-the-art levels for fully connected SNNs.

Methodological Highlights

  • Neuron Model and Coding Scheme: The framework employs non-leaky integrate-and-fire neurons, which keep the per-neuron computation inexpensive. Each neuron fires at most once per stimulus, so communication consists of sparse, asynchronous binary events that mirror biological spike-based signaling (a forward-pass sketch follows this list).
  • Temporal Backpropagation: The authors transpose classical error backpropagation to the temporal domain, defining and approximating error gradients with respect to spike latencies rather than the activation values used in conventional ANNs.
  • Learning Process: The paper lays out the forward and backward passes of S4NN, in which the temporal order of spikes encodes the information. Weights are updated from errors computed on neuron firing times, i.e., the difference between actual and target latencies (see the weight-update sketch after this list).
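
To make the forward pass concrete, here is a minimal NumPy sketch of a non-leaky integrate-and-fire layer with time-to-first-spike coding, written in the spirit of S4NN rather than taken from the released code; the simulation horizon T, the threshold value, and the function names are illustrative assumptions.

```python
import numpy as np

T = 256  # assumed simulation horizon (time steps); latencies lie in [0, T)

def ttfs_encode(x, t_max=T):
    """Time-to-first-spike coding: stronger inputs spike earlier (one spike each)."""
    x = np.asarray(x, dtype=float).ravel()
    x = x / (x.max() + 1e-12)
    return np.round((1.0 - x) * (t_max - 1)).astype(int)

def if_layer_forward(in_times, W, threshold=1.0, t_max=T):
    """Non-leaky integrate-and-fire layer with at most one spike per neuron.

    in_times : (n_in,) spike latency of each input neuron
    W        : (n_out, n_in) synaptic weights
    Returns the firing time of each output neuron (t_max if it never fires).
    """
    n_out = W.shape[0]
    out_times = np.full(n_out, t_max)
    potential = np.zeros(n_out)
    for t in range(t_max):
        arriving = (in_times == t).astype(float)         # input spikes arriving now
        potential += W @ arriving                        # non-leaky integration
        fired = (potential >= threshold) & (out_times == t_max)
        out_times[fired] = t                             # fire once, then stay silent
    return out_times

# Rank-order readout: the first output neuron to fire gives the predicted class.
# Example with hypothetical sizes:
#   in_times = ttfs_encode(image)                      # e.g. a flattened 28x28 image
#   W = 0.01 * np.random.default_rng(0).standard_normal((10, 784))
#   predicted = int(np.argmin(if_layer_forward(in_times, W)))
```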
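Correspondingly, here is a simplified sketch of the latency-based update for the readout layer. It follows the paper's overall recipe (squared error on firing times, the approximation dt/dV ≈ -1, and target times that push the correct neuron to fire first and all others toward the maximum time), but the specific target margin, learning rate, and normalization are placeholders rather than the paper's exact settings.

```python
import numpy as np

def readout_temporal_update(W, in_times, out_times, label, lr=0.1, t_max=256):
    """One latency-based update for the readout layer (simplified sketch).

    Target times: the correct neuron is pushed to fire first (one step before the
    earliest output spike, a placeholder margin); all other neurons are pushed
    toward t_max. With squared error on firing times and the approximation
    dt/dV = -1, the gradient w.r.t. w[j, i] reduces to the temporal error of
    neuron j times an indicator that input i spiked no later than neuron j fired.
    """
    out_times = out_times.astype(float)
    target = np.full_like(out_times, float(t_max))
    target[label] = max(out_times.min() - 1.0, 0.0)

    err = target - out_times                              # >0: fired too early, <0: too late
    # causal mask: input spike i arrived before (or when) output neuron j fired
    causal = (in_times[None, :] <= out_times[:, None]).astype(float)

    # gradient descent on 0.5 * err**2 with dt/dw = -causal gives dE/dw = err * causal
    W = W - lr * err[:, None] * causal
    return W, err
```

Chaining if_layer_forward across layers and propagating latency errors backward layer by layer, as the paper describes, yields the full S4NN training procedure; hidden-layer updates additionally require the backpropagated error terms, which this sketch omits.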

Implications and Speculation

The research has both theoretical and practical implications. Theoretically, it offers insight into biologically plausible learning mechanisms by demonstrating that single-spike activity can support complex classification tasks. Practically, it opens avenues for highly energy-efficient computation, since the simplified neuron model is well suited to edge-computing applications.

This work suggests a potential shift in how SNNs can be used for tasks historically dominated by ANNs, particularly in settings where energy efficiency is paramount. In addition, the approach's reliance on simpler neuron models may simplify hardware implementation, aligning with ongoing developments in neuromorphic engineering.

Future Directions

The integration of S4NN into deeper architectures and its potential extension to convolutional networks were briefly explored. Further research could focus on enhancing the scalability of this neural model and applying it to a broader range of data types beyond image categorization. Another promising direction could be improving hardware implementations, leveraging the model's event-driven properties for streamlined and resource-efficient processing units.

This paper sets a foundational framework for further exploration into how temporal aspects of neural coding can be utilized to develop efficient, biologically inspired neural networks that are adaptable across different computational paradigms and real-world applications.
