Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation (1811.10678v2)

Published 23 Oct 2018 in cs.NE, cs.LG, eess.SP, and stat.ML

Abstract: Spiking neural networks (SNNs) have garnered great interest for supervised and unsupervised learning applications. This paper addresses the problem of training multi-layer feedforward SNNs. The non-linear integrate-and-fire dynamics of spiking neurons make it difficult to train SNNs to generate desired spike trains in response to a given input. To tackle this, the problem of training a multi-layer SNN is first formulated as an optimization problem whose objective function is based on the deviation in membrane potential rather than the spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), designed for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. Next, the rule is reformulated to efficiently train multi-layer SNNs and is shown to effectively perform spatio-temporal error backpropagation. The learning rule is validated by training $2$-layer SNNs to solve a spike-based formulation of the XOR problem, as well as by training $3$-layer SNNs on generic spike-based training problems. The new algorithm is thus a key step towards building deep spiking neural networks capable of efficient event-triggered learning.
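For a concrete picture of the update rule the abstract describes, here is a minimal sketch of a NormAD-style weight update for a single leaky integrate-and-fire (LIF) neuron. This is not the authors' implementation: the current-based LIF model, the exponential synaptic kernel, the simplified +1/-1 spike-train error, and every parameter value (tau_m, tau_s, v_thresh, lr, input rates) are illustrative assumptions made for the sketch.

```python
# Sketch of a NormAD-style update for one LIF neuron (illustrative only).
# Core idea: accumulate the spike-train error weighted by the *normalized*
# filtered input -- the normalization is what "Normalized Approximate
# Descent" refers to. All constants below are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

T, dt = 0.1, 1e-4                  # 100 ms window, 0.1 ms time step
steps = int(T / dt)
n_in = 20                          # presynaptic neurons
tau_m, tau_s = 20e-3, 5e-3         # membrane / synaptic time constants
v_thresh = 1.0                     # normalized firing threshold
lr = 0.05                          # learning rate (illustrative)

# Random Poisson-like input spikes and a random desired output spike train.
in_spikes = rng.random((steps, n_in)) < 50 * dt     # ~50 Hz per input
desired = np.zeros(steps, dtype=bool)
desired[rng.choice(steps, size=5, replace=False)] = True

w = rng.normal(0.0, 0.1, n_in)

def lif_forward(w):
    """Simulate the LIF neuron; return its output spike train and the
    membrane-filtered inputs d_hat(t) used by the learning rule."""
    v = 0.0
    c = np.zeros(n_in)             # synaptically filtered input spikes
    d = np.zeros(n_in)             # c(t) further filtered by the membrane
    d_trace = np.zeros((steps, n_in))
    out = np.zeros(steps, dtype=bool)
    for t in range(steps):
        c += dt * (-c / tau_s) + in_spikes[t]
        d += dt * (-d / tau_m + c)
        d_trace[t] = d
        v += dt * (-v / tau_m + w @ c)
        if v >= v_thresh:          # fire and reset
            out[t] = True
            v = 0.0
    return out, d_trace

for epoch in range(100):
    out, d_trace = lif_forward(w)
    # Spike-train error e(t): +1 where a desired spike is missing,
    # -1 where a spurious spike occurred (a deliberate simplification).
    e = desired.astype(float) - out.astype(float)
    dw = np.zeros(n_in)
    for t in np.flatnonzero(e):
        d_hat = d_trace[t]
        norm = np.linalg.norm(d_hat)
        if norm > 0:
            dw += e[t] * d_hat / norm   # normalized descent direction
    w += lr * dw
    if epoch % 20 == 0:
        print(f"epoch {epoch}: {int(out.sum())} output spikes, "
              f"{int((out & desired).sum())} coincide with desired")
```

Per the abstract, the full method goes beyond this single-neuron rule: the update is reformulated for multi-layer networks, with the error propagated backward across both layers (space) and spike timing (time).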

Authors (2)
  1. Navin Anwani (1 paper)
  2. Bipin Rajendran (50 papers)
Citations (17)
