
Direct Training for Spiking Neural Networks: Faster, Larger, Better (1809.05793v2)

Published 16 Sep 2018 in cs.NE

Abstract: Spiking neural networks (SNNs), which enable energy-efficient implementation on emerging neuromorphic hardware, are gaining more attention. So far, however, SNNs have not shown competitive performance compared with artificial neural networks (ANNs), due to the lack of effective learning algorithms and efficient programming frameworks. We address this issue from two aspects: (1) We propose a neuron normalization technique to adjust the neural selectivity and develop a direct learning algorithm for deep SNNs. (2) By narrowing the rate coding window and converting the leaky integrate-and-fire (LIF) model into an explicitly iterative version, we present a PyTorch-based implementation method for the training of large-scale SNNs. In this way, we are able to train deep SNNs with tens of times speedup. As a result, we achieve significantly better accuracy than reported works on neuromorphic datasets (N-MNIST and DVS-CIFAR10), and accuracy comparable to existing ANNs and pre-trained SNNs on non-spiking datasets (CIFAR10). To the best of our knowledge, this is the first work that demonstrates direct training of deep SNNs with high performance on CIFAR10, and the efficient implementation provides a new way to explore the potential of SNNs.

Direct Training for Spiking Neural Networks: Faster, Larger, Better

The paper presents an innovative approach to directly training spiking neural networks (SNNs), addressing key challenges related to the development of effective learning algorithms and efficient programming frameworks. The research demonstrates significant improvements in both training performance and application accuracy, underscoring the potential of SNNs for neuromorphic applications.

Key Contributions

The paper focuses on the following areas:

  1. Neuron Normalization Technique: The introduction of the NeuNorm method addresses the challenge of balancing neuronal activity and enhancing model performance. It normalizes input strength across feature maps, offering a targeted solution for SNNs that is more biologically plausible and compatible with neuromorphic hardware (a hedged sketch of the idea follows this list).
  2. Explicitly Iterative LIF Model: By converting the leaky integrate-and-fire (LIF) model into an explicitly iterative form, the authors enable integration with mainstream machine learning frameworks like PyTorch. This conversion resolves the implicit nature of the traditional continuous LIF dynamics, making training computationally tractable for large-scale SNNs (see the sketch after this list).
  3. Rate Coding Optimization: The research refines existing rate coding techniques, significantly reducing the simulation length required for satisfactory model performance. Encoding and decoding enhancements allow precise representation even with short time windows, which is crucial for large-scale SNNs.
  4. PyTorch Implementation: Employing PyTorch not only accelerates the training process but also scales the networks effectively, allowing deeper architectures and improved results on both spiking and non-spiking datasets.
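The core enabler is the explicitly iterative LIF formulation, which turns the continuous membrane dynamics into a per-time-step update that an autodiff framework can simply unroll. Below is a minimal PyTorch sketch of such an update with a rectangular surrogate gradient for the non-differentiable spike; the decay factor, threshold, surrogate width, layer sizes, and the Bernoulli input coding are illustrative assumptions, not the paper's exact settings.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (width is an assumption)."""
    @staticmethod
    def forward(ctx, u, v_th):
        ctx.save_for_backward(u)
        ctx.v_th = v_th
        return (u >= v_th).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Pass gradient only for membrane potentials near the threshold.
        surrogate = (torch.abs(u - ctx.v_th) < 0.5).float()
        return grad_output * surrogate, None


def lif_step(x_t, u, o, fc, tau=0.3, v_th=1.0):
    """One explicitly iterative LIF update: leak, reset-by-spike, integrate, fire."""
    u = tau * u * (1.0 - o) + fc(x_t)    # membrane potential update
    o = SpikeFn.apply(u, v_th)           # binary spikes, differentiable via surrogate
    return u, o


# Toy usage: unroll T time steps of Bernoulli rate-coded input (shapes are illustrative).
T, B, D_in, D_out = 8, 4, 100, 10
fc = torch.nn.Linear(D_in, D_out)
x = (torch.rand(T, B, D_in) < 0.5).float()
u = torch.zeros(B, D_out)
o = torch.zeros(B, D_out)
spike_sum = torch.zeros(B, D_out)
for t in range(T):
    u, o = lif_step(x[t], u, o, fc)
    spike_sum = spike_sum + o
rate = spike_sum / T   # decoded firing rate, usable as logits for a classification loss
```

Training then proceeds by backpropagating a loss on the decoded firing rate through the unrolled time steps, which is what lets standard frameworks handle deep SNNs directly.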

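For the normalization idea in item 1, the sketch below shows one way an auxiliary trace of average feature-map activity can modulate the signal passed to the next layer; the decay constant and the form of the correction are assumptions for illustration rather than the paper's exact NeuNorm equations.

```python
import torch

def neunorm_step(spikes, aux, weight, k_tau=0.9):
    """One step of a feature-map-wise normalization trace (illustrative sketch).

    spikes : [B, C, H, W] binary outputs of a convolutional SNN layer at this step
    aux    : [B, 1, H, W] leaky trace of the mean activity across the C feature maps
    weight : [1, C, 1, 1] trainable weights coupling the trace back to each map
    """
    mean_act = spikes.mean(dim=1, keepdim=True)   # average activity over feature maps
    aux = k_tau * aux + mean_act                  # leaky integration of that activity
    adjusted = spikes - weight * aux              # data-dependent correction to the signal
    return adjusted, aux
```
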
Results and Implications

The paper reports that, using the proposed methodologies, SNNs trained directly in frameworks such as PyTorch achieve a substantial speedup over traditional tools such as MATLAB, accelerating training by tens of times. On the neuromorphic datasets N-MNIST and DVS-CIFAR10, the research achieves state-of-the-art accuracy, marking a significant advance in SNN performance over previous indirect training approaches.

For non-spiking datasets like CIFAR10, the results are comparable to state-of-the-art ANNs, strengthening the argument for SNNs' potential in broader applications. These strides are made without the need for excessive simulation lengths, offering practical gains in both computational efficiency and energy consumption.

Theoretical and Practical Implications

The paper's findings indicate that SNNs can achieve competencies similar to ANNs with appropriate training techniques and frameworks. Practically, these systems are well-suited for deployment on power-efficient hardware, opening avenues in various applications like real-time processing in constrained environments. Theoretically, the refined understanding of direct training mechanisms enhances our conceptual foundations of neuromorphic computing.

Future Directions

The research paves the way for further exploration in:

  • Scalability: Extending these methodologies to even larger datasets and more complex tasks will be crucial.
  • Hardware Integration: Seamless integration with neuromorphic platforms could move these theoretical advancements towards practical application.
  • Algorithmic Improvements: Continued refinement of training algorithms and normalization techniques could further boost learning efficiency and network performance.

In conclusion, this work makes significant strides in advancing the performance of spiking neural networks, offering both theoretical insight and practical utility in the field of neuromorphic computing.

Authors (5)
  1. Yujie Wu (34 papers)
  2. Lei Deng (81 papers)
  3. Guoqi Li (90 papers)
  4. Jun Zhu (424 papers)
  5. Luping Shi (21 papers)
Citations (587)