Training Deep Spiking Neural Networks using Backpropagation (1608.08782v1)

Published 31 Aug 2016 in cs.NE

Abstract: Deep spiking neural networks (SNNs) hold great potential for improving the latency and energy efficiency of deep neural networks through event-based computation. However, training such networks is difficult due to the non-differentiable nature of asynchronous spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are only considered as noise. This enables an error backpropagation mechanism for deep SNNs, which works directly on spike signals and membrane potentials. Thus, compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. Our novel framework outperforms all previously reported results for SNNs on the permutation invariant MNIST benchmark, as well as the N-MNIST benchmark recorded with event-based vision sensors.

Authors (3)
  1. Jun Haeng Lee (4 papers)
  2. Tobi Delbruck (40 papers)
  3. Michael Pfeiffer (17 papers)
Citations (892)

Summary

Training Deep Spiking Neural Networks Using Backpropagation

The paper "Training Deep Spiking Neural Networks using Backpropagation" by Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer introduces a novel approach for training deep Spiking Neural Networks (SNNs). SNNs promise improved latency and energy efficiency over traditional Artificial Neural Networks (ANNs) due to their event-driven nature and asynchronous spike-based computation. Despite their potential, training SNNs has been challenging due to the non-differentiable nature of spike events, a fundamental barrier to applying error backpropagation directly.

Key Contributions

The authors propose a method wherein the membrane potentials of spiking neurons are treated as differentiable signals. Discontinuities at spike times are considered as noise, allowing the formulation of an error backpropagation mechanism that works directly on spike signals and membrane potentials. This approach aims to provide a more precise capture of spiking statistics compared to indirect training methods that rely on conversion from trained ANNs to SNNs.
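To make the core idea concrete, the following is a minimal sketch, not the authors' exact derivation: the forward pass uses a hard spike threshold, while the backward pass differentiates through the accumulated membrane potential as if the spike and reset discontinuities were absent, i.e., treating them as noise. All names, sizes, and constants below are illustrative assumptions.

```python
# Illustrative sketch only: hard-threshold spiking in the forward pass,
# while the backward pass treats the membrane potential as a differentiable
# signal and ignores the spike/reset discontinuity (handled as "noise").
import numpy as np

rng = np.random.default_rng(0)
T, n_in = 20, 5                        # time steps, input dimension (assumed)
x = rng.random((T, n_in))              # hypothetical input currents
w = rng.normal(0.0, 0.5, n_in)         # input weights
threshold = 1.0

# Forward pass: integrate-and-fire with reset-by-subtraction
v, spikes = 0.0, []
for t in range(T):
    v += x[t] @ w                      # accumulate membrane potential
    s = float(v >= threshold)          # non-differentiable spike event
    v -= s * threshold                 # reset by subtracting the threshold
    spikes.append(s)
rate = np.mean(spikes)                 # firing rate over the window

# Backward pass (sketch): if the spike train is treated as the accumulated
# potential scaled by the threshold, the reset discontinuity drops out and
# d(rate)/dw reduces to the mean input divided by the threshold.
grad_w = x.mean(axis=0) / threshold
print(f"rate={rate:.2f}, grad_w={grad_w}")
```

In the paper's framework, this differentiable treatment is applied layer by layer so that standard stochastic gradient descent can be used end to end.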

Methodology

The paper introduces several innovations:

  • Spike-based Backpropagation: By regarding the spikes’ contribution to membrane potentials as differentiable signals, the authors bypass the traditional hurdle of non-differentiability in spike events. This allows for direct optimization using stochastic gradient descent (SGD).
  • Leaky Integrate-and-Fire (LIF) Neuron Model: The authors adopt the LIF model, defining its dynamics and contribution to the membrane potential in a manner that supports differentiation.
  • Winner-Take-All (WTA) Circuit: To enhance competition among neurons and improve network performance, the authors introduce WTA circuits with lateral inhibition in certain layers. This WTA architecture helps keep neuron activities balanced during training (a minimal forward-pass sketch combining LIF dynamics with WTA inhibition follows this list).
  • Parameter Initialization and Error Normalization: Because backpropagated errors in deep SNNs can easily vanish or explode, the authors propose specific weight-initialization schemes and error-normalization techniques to keep gradients in a workable range, which is crucial for training deep networks.
  • Regularization Techniques: The paper introduces weight and threshold regularization methods to ensure stability and promote balanced neuron participation during training.
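As referenced in the list above, the following is a compact, hypothetical forward simulation combining leaky integrate-and-fire dynamics with a winner-take-all layer using lateral inhibition. The time constants, threshold, and inhibition rule are illustrative assumptions, not the paper's exact parameters.

```python
# Hedged sketch: a leaky integrate-and-fire (LIF) layer with a simple
# winner-take-all (WTA) lateral-inhibition rule. Constants and the exact
# inhibition mechanism are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(1)
T, n_in, n_out = 50, 10, 4
dt, tau_mem = 1.0, 20.0                 # time step and membrane time constant
threshold, inhibition = 1.0, 0.5        # spike threshold, rival suppression

W = rng.normal(0.0, 0.3, (n_out, n_in)) # feed-forward weights
in_spikes = (rng.random((T, n_in)) < 0.2).astype(float)  # Poisson-like input

v = np.zeros(n_out)                     # membrane potentials
out_spikes = np.zeros((T, n_out))
for t in range(T):
    v += dt / tau_mem * (-v) + W @ in_spikes[t]   # leak + synaptic input
    if (v >= threshold).any():
        winner = int(np.argmax(v))                # strongest neuron fires
        out_spikes[t, winner] = 1.0
        v[winner] = 0.0                           # reset the winner
        rivals = np.arange(n_out) != winner
        v[rivals] = np.maximum(v[rivals] - inhibition, 0.0)  # inhibit rivals

print("spike counts per neuron:", out_spikes.sum(axis=0))
```

In this toy rule only the winning neuron emits a spike at each step, which keeps the layer's activity sparse and balanced, matching the competitive role described for the WTA circuits above.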

Results

The proposed techniques were evaluated on the permutation invariant MNIST benchmark and the N-MNIST benchmark derived from event-based vision sensors. Key numerical results include:

  • Permutation-invariant MNIST: The best accuracy achieved was 98.77% using the Adam optimizer, matching several state-of-the-art ANN benchmarks and surpassing all previously reported SNN results on this task.
  • N-MNIST Benchmark: Without preprocessing to center the digits, the proposed method achieved 98.53% accuracy, outperforming prior approaches that relied on preprocessed (centered) data.

Implications and Future Work

This work demonstrates that deep SNNs can achieve parity with conventional deep learning methods on complex tasks, potentially leveraging the efficiency benefits of event-based computation. The ability to directly train SNNs on spike signals makes them highly suitable for processing data from event-based sensors like Dynamic Vision Sensors (DVS).

Future research could expand on:

  • Architectural Innovations: Extending the proposed training methodology to Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).
  • Neuromorphic Hardware: Leveraging efficient neuromorphic processors for implementing trained SNNs, which could yield substantial energy savings and performance benefits in real-time applications.

Conclusion

The novel spike-based backpropagation technique presented in this paper signifies a substantial step towards making SNNs more accessible for real-world applications. By addressing the training challenges inherent in SNNs and demonstrating competitive performance on established benchmarks, this work lays the groundwork for further advancements in deep spiking neural networks and their practical applications.