Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks (1805.07866v6)

Published 21 May 2018 in cs.NE and cs.LG

Abstract: Spiking neural networks (SNNs) are positioned to enable spatio-temporal information processing and ultra-low power event-driven neuromorphic hardware. However, SNNs are yet to reach the same performances of conventional deep artificial neural networks (ANNs), a long-standing challenge due to complex dynamics and non-differentiable spike events encountered in training. The existing SNN error backpropagation (BP) methods are limited in terms of scalability, lack of proper handling of spiking discontinuities, and/or mismatch between the rate-coded loss function and computed gradient. We present a hybrid macro/micro level backpropagation (HM2-BP) algorithm for training multi-layer SNNs. The temporal effects are precisely captured by the proposed spike-train level post-synaptic potential (S-PSP) at the microscopic level. The rate-coded errors are defined at the macroscopic level, computed and back-propagated across both macroscopic and microscopic levels. Different from existing BP methods, HM2-BP directly computes the gradient of the rate-coded loss function w.r.t tunable parameters. We evaluate the proposed HM2-BP algorithm by training deep fully connected and convolutional SNNs based on the static MNIST [14] and dynamic neuromorphic N-MNIST [26]. HM2-BP achieves an accuracy level of 99.49% and 98.88% for MNIST and N-MNIST, respectively, outperforming the best reported performances obtained from the existing SNN BP algorithms. Furthermore, the HM2-BP produces the highest accuracies based on SNNs for the EMNIST [3] dataset, and leads to high recognition accuracy for the 16-speaker spoken English letters of TI46 Corpus [16], a challenging spatio-temporal speech recognition benchmark for which no prior success based on SNNs was reported. It also achieves competitive performances surpassing those of conventional deep learning models when dealing with asynchronous spiking streams.

Authors (3)
  1. Yingyezhe Jin (1 paper)
  2. Wenrui Zhang (20 papers)
  3. Peng Li (390 papers)
Citations (170)

Summary

  • The paper introduces HM2-BP, a hybrid algorithm that computes gradients at both the macro and micro levels to enhance spiking neural network training.
  • It leverages spike-train level post-synaptic potentials for accurate temporal processing alongside firing rate coding for spatial integration.
  • Results demonstrate superior accuracy on MNIST, N-MNIST, and TI46 datasets, setting a new benchmark in neuromorphic computing.

Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks

The paper "Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks" presents a novel algorithm, known as Hybrid Macro/Micro Level Backpropagation (HM2-BP), which essentially addresses the longstanding challenge of efficiently training spiking neural networks (SNNs). SNNs are lauded for their potential in low-power spatio-temporal processing and their applicability to neuromorphic hardware—distinguishing them from the conventional artificial neural networks (ANNs). However, training SNNs involves complex temporal dynamics and discrete, non-differentiable spike events, which have historically limited their performance.

Key Contributions

The paper introduces HM2-BP, an algorithm designed to compute and backpropagate errors across both macro and micro levels within deep spiking neural networks. This method is distinct from existing backpropagation methods because it directly computes the gradient of the rate-coded loss function with respect to the tunable parameters, a significant departure from traditional approaches that often smooth out spikes or ignore temporal correlations.
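
In chain-rule terms, and with notation simplified for this summary (the symbols below are illustrative rather than the paper's exact definitions), the weight gradient factors into a macro-level rate error and a micro-level S-PSP sensitivity:

```latex
% Illustrative decomposition for output neuron i and pre-synaptic neuron j:
%   E            rate-coded loss, e.g. E = (1/2) sum_i (y_i - o_i)^2
%   o_i          firing count of neuron i, approximated as a_i / nu
%   a_i          total post-synaptic potential, a_i = sum_j w_ij * e_hat_{i|j}
%   e_hat_{i|j}  S-PSP of pre-synaptic neuron j on neuron i (micro level)
%   nu           firing threshold
\frac{\partial E}{\partial w_{ij}}
  = \underbrace{\frac{\partial E}{\partial o_i}}_{\text{macro: rate-coded error}}
    \cdot
    \underbrace{\frac{\partial o_i}{\partial w_{ij}}}_{\text{micro: S-PSP sensitivity}}
  \approx (o_i - y_i)\,\frac{\hat{e}_{i|j}}{\nu}
```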

  • Micro-level Processing: This level captures the precise effect of individual spikes using the spike-train level post-synaptic potential (S-PSP), incorporating spike timing into the training process rather than relying on averaging or rate-based estimates.
  • Macro-level Processing: Errors are defined on firing rates, allowing backpropagation that accounts for both spatial integration and temporal precision. This dual-level processing bridges firing rate codes and discrete spike events, enabling more effective training; a simplified sketch of the combined computation follows this list.
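
As a concrete illustration, here is a minimal NumPy sketch of how a single output layer's gradient could be assembled from these two levels. The function names, the PSP kernel, and the rate-equals-potential-over-threshold approximation are simplifying assumptions made for exposition; this is not the paper's reference implementation.

```python
import numpy as np

def spike_train_psp(pre_spikes, post_spikes, tau_m=10.0, tau_s=2.5):
    """Approximate spike-train level post-synaptic potential (S-PSP):
    the accumulated effect of one pre-synaptic spike train on one
    post-synaptic neuron, evaluated at that neuron's firing times.
    Uses an illustrative double-exponential kernel, not the paper's exact form."""
    e = 0.0
    for t_post in post_spikes:
        for t_pre in pre_spikes:
            dt = t_post - t_pre
            if dt > 0:
                e += np.exp(-dt / tau_m) - np.exp(-dt / tau_s)
    return e

def hybrid_gradient(e_spsp, rates, targets, threshold=1.0):
    """Gradient of the rate-coded loss E = 0.5 * ||targets - rates||^2
    with respect to the output-layer weights, factored into a macro-level
    rate error and a micro-level S-PSP sensitivity (single layer, simplified).

    e_spsp:  (n_out, n_in) matrix of S-PSP values, e_spsp[i, j]
    rates:   (n_out,) observed firing counts
    targets: (n_out,) desired firing counts
    """
    delta = rates - targets                       # macro level: dE/d(rate_i)
    grad = (delta[:, None] * e_spsp) / threshold  # micro level: d(rate_i)/d(w_ij) ~ e_spsp / threshold
    return grad
```

In a full multi-layer setting, the macro-level error would itself be propagated backwards through the S-PSP terms of hidden layers, which is where the hybrid treatment of rate-coded errors and spike-level effects matters most.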

Numerical Results

The proposed HM2-BP algorithm achieves accuracies of 99.49% and 98.88% on the static MNIST and dynamic neuromorphic N-MNIST datasets, respectively. These results surpass previous SNN training algorithms and compete favorably with standard deep learning models in scenarios involving asynchronous spiking data streams. Notably, HM2-BP also delivers high recognition accuracy on the challenging 16-speaker spoken English letters task from the TI46 corpus, a spatio-temporal speech benchmark for which no prior SNN success had been reported.

Implications and Future Work

This research has significant implications for the advancement of spiking neural networks, particularly in fields requiring efficient processing of dynamic temporal information, such as robotics and real-time signal processing. The hybrid approach provides a scalable route to SNN training, offering a principled way to handle spiking discontinuities and to improve the correspondence between rate-coded loss functions and computed gradients.

Future work could explore extending this hybrid methodology to more extensive neural architectures and assess its performance on other neuromorphic datasets. Moreover, investigations into further optimizing the balance between micro-level temporal precision and macro-level rate-coded efficiency could uncover new pathways for enhancing neuromorphic computing capabilities.

Through rigorous evaluation and comparisons against existing methods, HM2-BP has set a new benchmark for training effectiveness and computational efficiency in spiking neural networks, bridging the gap between biological realism and machine learning performance. As the community continues to refine such algorithms, the potential for deployed neuromorphic systems becomes increasingly feasible, promising advancements in AI applications where power efficiency and accurate spatio-temporal processing are paramount.