
SLAYER: Spike Layer Error Reassignment in Time (1810.08646v1)

Published 5 Sep 2018 in cs.NE, cs.LG, and stat.ML

Abstract: Configuring deep Spiking Neural Networks (SNNs) is an exciting research avenue for low power spike event based computation. However, the spike generation function is non-differentiable and therefore not directly compatible with the standard error backpropagation algorithm. In this paper, we introduce a new general backpropagation mechanism for learning synaptic weights and axonal delays which overcomes the problem of non-differentiability of the spike function and uses a temporal credit assignment policy for backpropagating error to preceding layers. We describe and release a GPU accelerated software implementation of our method which allows training both fully connected and convolutional neural network (CNN) architectures. Using our software, we compare our method against existing SNN based learning approaches and standard ANN to SNN conversion techniques and show that our method achieves state of the art performance for an SNN on the MNIST, NMNIST, DVS Gesture, and TIDIGITS datasets.

Citations (694)

Summary

  • The paper presents a novel temporal credit assignment method that overcomes the non-differentiability challenge in SNN training.
  • It introduces a GPU-accelerated implementation for both fully connected and convolutional architectures to optimize performance.
  • Experimental results show state-of-the-art accuracy across diverse datasets like MNIST and DVS Gesture, highlighting its practical impact.

Overview of SLAYER: Spike Layer Error Reassignment in Time

SLAYER introduces a novel approach for training Spiking Neural Networks (SNNs), which are pivotal for efficient, low-power, event-driven computation. Traditional deep learning relies on backpropagation, a process predicated on differentiability, yet spike generation in SNNs is inherently non-differentiable, posing a significant obstacle. This paper addresses that limitation with a backpropagation mechanism tailored to SNNs via temporal credit assignment, enabling effective error propagation to preceding layers.
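To see why standard backpropagation fails, consider a minimal sketch (illustrative only, not the paper's code): spike generation can be modeled as a Heaviside step on the membrane potential, whose exact derivative is zero almost everywhere, so gradient descent receives no usable learning signal.

```python
import numpy as np

def spike(u, theta=1.0):
    """Spike generation as a Heaviside step: emit 1 when the membrane
    potential u reaches the threshold theta, else 0. Non-differentiable."""
    return (u >= theta).astype(float)

def spike_grad_exact(u, theta=1.0):
    """The exact derivative of the step function is zero everywhere except
    at u == theta (a Dirac impulse), so backprop through it yields no
    gradient for the weights upstream."""
    return np.zeros_like(u)

u = np.array([0.2, 0.9, 1.1, 1.5])   # example membrane potentials
print(spike(u))             # [0. 0. 1. 1.]
print(spike_grad_exact(u))  # [0. 0. 0. 0.]
```

The vanishing exact gradient is precisely what SLAYER's error reassignment works around.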

Methodology

The proposed approach resolves the differentiability issue by replacing the exact spike-function derivative with a smooth approximation and applying a temporal credit assignment policy: errors are redistributed both across layers and backwards in time, permitting learning of synaptic weights as well as axonal delays. This learning paradigm is complemented by a GPU-accelerated software implementation capable of training both fully connected and convolutional neural network architectures.
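The key mechanism can be sketched as follows. This is a hedged illustration of the general surrogate-derivative idea, not SLAYER's actual implementation: the exact spike derivative is replaced by a smooth density-like function peaked at the firing threshold, so upstream error is scaled most strongly for neurons whose membrane potential was near threshold. The form `alpha * exp(-beta * |u - theta|)` and the values of `alpha`, `beta`, and `theta` are illustrative assumptions.

```python
import numpy as np

def surrogate_spike_grad(u, theta=1.0, alpha=1.0, beta=5.0):
    """Smooth stand-in for the spike function's derivative: a density-like
    curve peaked at the threshold theta. alpha/beta are illustrative
    hyperparameters controlling height and sharpness."""
    return alpha * np.exp(-beta * np.abs(u - theta))

# Backpropagating an upstream error e through the spike nonlinearity:
# instead of the zero-almost-everywhere exact derivative, scale e by the
# surrogate so that near-threshold neurons receive the most credit.
u = np.array([0.2, 0.95, 1.05, 2.0])   # membrane potentials at one time step
e = np.ones_like(u)                    # upstream error signal
grad = e * surrogate_spike_grad(u)
print(grad.round(3))
```

Note how the two near-threshold entries (0.95 and 1.05) dominate the gradient, while far-from-threshold neurons are barely updated; repeating this scaling at each time step is what lets error be reassigned backwards in time as well as across layers.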

Experimental Results

The effectiveness of SLAYER is substantiated through comparative evaluation on several datasets: MNIST, NMNIST, DVS Gesture, and TIDIGITS. The paper reports state-of-the-art SNN performance, surpassing existing SNN-based learning frameworks and traditional ANN-to-SNN conversion methods. These results underscore the potential of SLAYER's methodology for improving SNN training.

Implications and Future Directions

SLAYER's contributions are twofold: it addresses a fundamental limitation in SNN training while demonstrating practical performance improvements. This advancement invites future research to explore further optimizations and refinements of the temporal credit assignment process. Additionally, SLAYER's enhanced efficiency presents opportunities for deploying SNNs in real-world applications, particularly in edge computing environments where power efficiency is paramount.

In conclusion, SLAYER represents a substantial step forward in refining SNN training methodologies. The ability to overcome non-differentiability issues opens new avenues for research in neural network design, with promising implications for both theoretical developments and practical applications in artificial intelligence.
