
Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks (2002.10085v4)

Published 24 Feb 2020 in cs.NE and cs.LG

Abstract: Spiking neural networks (SNNs) are well suited for spatio-temporal learning and implementations on energy-efficient event-driven neuromorphic processors. However, existing SNN error backpropagation (BP) methods lack proper handling of spiking discontinuities and suffer from low performance compared with the BP methods for traditional artificial neural networks. In addition, a large number of time steps are typically required to achieve decent performance, leading to high latency and rendering spike-based computation unscalable to deep architectures. We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs, which breaks down error backpropagation across two types of inter-neuron and intra-neuron dependencies and leads to improved temporal learning precision. It captures inter-neuron dependencies through presynaptic firing times by considering the all-or-none characteristics of firing activities and captures intra-neuron dependencies by handling the internal evolution of each neuronal state in time. TSSL-BP efficiently trains deep SNNs within a much shortened temporal window of a few steps while improving the accuracy for various image classification datasets including CIFAR10.

Authors (2)
  1. Wenrui Zhang (20 papers)
  2. Peng Li (390 papers)
Citations (202)

Summary

  • The paper introduces TSSL-BP, advancing SNN training by partitioning backpropagation into inter- and intra-neuron dependencies for efficient temporal learning.
  • It reduces computational latency by using as few as 5 time steps, achieving up to a 3.98% accuracy improvement on CIFAR10 compared to traditional methods.
  • The research paves the way for energy-efficient neuromorphic computing and next-generation AI by aligning backpropagation with biological spiking dynamics.

Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks: An Overview

This paper introduces a sophisticated approach to training deep Spiking Neural Networks (SNNs) through a newly proposed method, Temporal Spike Sequence Learning Backpropagation (TSSL-BP). Designed to tackle the intricate computational dynamics of SNNs, TSSL-BP offers a refined algorithmic strategy that accommodates the spatio-temporal characteristics unique to spiking neurons. The method improves both training efficiency and classification accuracy across image classification datasets such as CIFAR10.

Key Contributions

  1. Framework and Methodology: The TSSL-BP method is designed to address limitations in existing SNN backpropagation techniques. These limitations include inadequate handling of spiking discontinuities and high latency due to the necessity of a large number of time steps. The TSSL-BP method innovates by partitioning the backpropagation process into inter-neuron and intra-neuron dependencies, allowing it to better capture the temporal and spatial dynamics of neuronal activity.
  2. Inter-neuron and Intra-neuron Dependencies:
    • Inter-neuron Dependencies: These capture how presynaptic firing times influence postsynaptic firing activity. By accounting for the all-or-none character of spiking, the method backpropagates error through presynaptic spike times, enabling efficient error propagation despite the firing discontinuity.
    • Intra-neuron Dependencies: TSSL-BP also handles the internal evolution of a neuron's state across time, providing a detailed account of how successive firings within a neuron affect its downstream outputs.
  3. Implementation and Efficiency: The implementation focuses on maintaining high precision in temporal learning with a significantly reduced number of time steps (as few as 5), leading to ultra-low latency spike computations. This is contrasted against conventional SNN methods, which often demand hundreds of computational steps. This reduction in computational overhead manifests in improved runtime efficiency and reduced energy dissipation.
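To make the two dependency types concrete, the following is a minimal sketch (not the paper's implementation) of a single leaky integrate-and-fire (LIF) neuron simulated over a short temporal window. The leak time constant, threshold, and input values are illustrative assumptions; the point is that the membrane potential evolves smoothly in time (the intra-neuron dependency), while the emitted spikes are all-or-none events (the inter-neuron dependency), whose discontinuity is what SNN backpropagation must handle.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Simulate one leaky integrate-and-fire neuron over T time steps.

    inputs: shape (T,) array of presynaptic input current per step.
    Returns (spikes, voltages). tau and v_th are illustrative values,
    not parameters taken from the paper.
    """
    T = len(inputs)
    v = 0.0
    spikes = np.zeros(T)
    voltages = np.zeros(T)
    for t in range(T):
        # Intra-neuron dependency: the membrane state evolves in time,
        # leaking toward zero while integrating the current input.
        v = v * (1.0 - 1.0 / tau) + inputs[t]
        # Inter-neuron dependency: output to postsynaptic neurons is an
        # all-or-none spike, a step function of the membrane potential.
        if v >= v_th:
            spikes[t] = 1.0
            v = 0.0  # hard reset after firing
        voltages[t] = v
    return spikes, voltages

# Only 5 time steps, matching the short temporal windows TSSL-BP targets.
spikes, voltages = lif_forward(np.array([0.6, 0.6, 0.6, 0.1, 0.6]))
print(spikes)  # the neuron fires once, at the third step
```

Because the spike is a step function of the voltage, its derivative is zero almost everywhere; TSSL-BP's decomposition sidesteps this by routing inter-neuron error through presynaptic firing times and intra-neuron error through the membrane state's evolution, rather than differentiating the step directly.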

Experimental Results

The paper reports notable improvements in classification accuracy over several benchmark datasets:

  • CIFAR10: TSSL-BP achieves up to a 3.98% accuracy improvement over previously reported SNN techniques, emphasizing its enhanced precision and computational scalability.
  • MNIST family: Across MNIST, N-MNIST, and FashionMNIST, TSSL-BP delivers competitive results even when executed with minimal temporal windows.

Implications and Future Prospects

The method proposed in this paper could have significant implications for both theoretical and practical advancements in the field of neuromorphic computing:

  • Theoretical: By resolving intrinsic limitations in temporal sequence learning, this research can broaden the understanding of backpropagation in neural models that closely parallel biological processes.
  • Practical: The marked improvement in latency and accuracy positions TSSL-BP as a valuable tool for deploying SNNs in real-world applications, particularly when run on energy-efficient neuromorphic hardware platforms.

Looking forward, TSSL-BP creates pathways for refining spike-based learning algorithms, potentially influencing the development of next-generation artificial intelligence systems that require less power, offer parallel processing capabilities, and demonstrate more human-like processing patterns. The availability of the TSSL-BP codebase to the research community further aids in validating these methods and catalyzing new research directions.

In conclusion, the TSSL-BP method represents a significant step toward efficient, precise training of deep spiking neural networks, promising substantial impacts in SNN research and application in neuromorphic engineering.