
Supervised Learning in Multilayer Spiking Neural Networks

Published 10 Feb 2012 in cs.NE and q-bio.NC | (1202.2249v1)

Abstract: The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms as it can be applied to neurons firing multiple spikes and it can in principle be applied to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike train encoding.

Citations (191)

Summary

  • The paper presents a novel supervised learning algorithm for multilayer spiking neural networks, enabling backpropagation for networks with multiple spiking neurons in all layers.
  • The method extends ReSuMe for multilayer SNNs using STDP-based error backpropagation, proving effective on benchmarks such as XOR and Iris classification.
  • The algorithm is robust to noisy temporal patterns and shows promise for applications in temporal processing and hardware-efficient neuromorphic computing.

The paper under consideration presents a novel supervised learning algorithm designed for multilayer spiking neural networks (SNNs). This development addresses a crucial gap in the field: the need for effective learning strategies applicable to neurons that fire multiple spikes, adaptable in principle to any linearizable neuron model. Unlike many existing solutions, the algorithm supports multilayer networks in which spiking neurons in all layers fire multiple spikes, representing a significant step towards enhanced computational power and flexibility in SNNs.

Overview of the Existing Context and Challenges

Traditional approaches to neural networks largely focused on rate-coded neurons, in which the firing frequency of a neuron represents an analog variable. The introduction of backpropagation made these networks flexible and computationally powerful. However, mounting experimental evidence suggests that biological neurons encode information in precise spike timing rather than in firing rate alone. This has motivated temporal encoding with single spikes in simulated neural networks, which researchers such as Maass have shown to be computationally more powerful than rate-coded models.

Earlier algorithms carried significant restrictions. SpikeProp extended backpropagation to multilayer SNNs but allowed only one spike per neuron, constraining the patterns such networks could process. The tempotron learning rule was mainly confined to single-layer networks, hindering its applicability in more complex architectures. The ReSuMe algorithm combined STDP (spike-timing-dependent plasticity) with Hebbian learning principles but was likewise restricted to single-layer networks.

The Proposed Algorithm and Methodology

The new algorithm extends the capabilities of previous work by enabling backpropagation of errors through a network with hidden layers of spiking neurons. The learning rule updates weights based on STDP and anti-STDP processes. Importantly, this algorithm expands upon ReSuMe by facilitating error backpropagation in a multilayer context, an essential adaptation for networks designed to handle more complex input-output mappings.
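The STDP/anti-STDP character of the weight update can be sketched for a single synapse. The following is an illustrative Python sketch of a ReSuMe-style rule, not the paper's exact formulation: the function name, the exponential learning window, and all parameter values (`a`, `A`, `tau`) are assumptions chosen for illustration.

```python
import math

def resume_delta_w(input_spikes, desired_spikes, actual_spikes,
                   a=0.01, A=0.5, tau=5.0):
    """ReSuMe-style weight change for one synapse (illustrative sketch).

    Desired (teacher) output spikes potentiate the synapse via an
    STDP-like term summed over preceding input spikes; actual output
    spikes depress it (anti-STDP). `a` is a non-Hebbian bias term, and
    A * exp(-s / tau) is an assumed exponential learning window.
    """
    def window_sum(t_out):
        # Sum the learning window over all input spikes preceding t_out.
        return sum(A * math.exp(-(t_out - t_in) / tau)
                   for t_in in input_spikes if t_in <= t_out)

    dw = 0.0
    for t_d in desired_spikes:   # teacher spikes -> potentiation
        dw += a + window_sum(t_d)
    for t_o in actual_spikes:    # actual output spikes -> depression
        dw -= a + window_sum(t_o)
    return dw
```

When the actual output spike train matches the desired one, the potentiation and depression terms cancel and the weight is left unchanged, which is the fixed point the learning rule converges towards.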

Key theoretical advancements involve deriving gradient descent rules in continuous time, with extensions that encompass delayed sub-connections and synaptic scaling to stabilize neuron activity. This approach ensures the spike timing and count of output neurons align closely with target spike trains, thus improving the accuracy and efficiency of learning in SNNs.
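The synaptic scaling mechanism mentioned above can be sketched as a simple multiplicative adjustment that keeps a neuron's activity within a target range. This is a minimal illustration of the general idea, not the paper's exact mechanism; the rate bounds and the scaling step are hypothetical values.

```python
def synaptic_scaling(weights, firing_rate, r_min, r_max, factor=0.01):
    """Multiplicative synaptic scaling (illustrative sketch).

    If the neuron fires above r_max, its incoming weights are scaled
    down; below r_min, they are scaled up; otherwise they are left
    unchanged. `factor` is a small, assumed scaling step.
    """
    if firing_rate > r_max:
        scale = 1.0 - factor
    elif firing_rate < r_min:
        scale = 1.0 + factor
    else:
        scale = 1.0
    return [w * scale for w in weights]
```

Because the adjustment is multiplicative, the relative strengths learned by the gradient rule are preserved while overall excitability is kept stable.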

Numerical Results and Implications

Numerical simulations presented in the paper include well-known benchmarks such as the XOR problem and Iris data classification. These tests confirm the algorithm's efficacy on non-linear problems and complex classification tasks under different spike timing encodings. When logical operations are encoded as spike train patterns, the multilayer ReSuMe shows significant improvements in learning speed and accuracy over earlier models such as SpikeProp.
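How a logical problem such as XOR is turned into spike times can be illustrated with a small sketch. The early/late convention and the specific millisecond values below are assumptions in the style of SpikeProp-like encodings, not the paper's exact parameters.

```python
def encode_xor_pattern(x1, x2, early=0.0, late=6.0):
    """Encode one binary XOR input pair as spike times (sketch).

    Logical 1 is represented by an early input spike and logical 0 by a
    late one; the 6 ms gap and the 10 ms output offset are illustrative.
    Returns (input spike times, target output spike time).
    """
    t1 = early if x1 else late
    t2 = early if x2 else late
    # Target: early output spike when inputs differ (XOR true),
    # late output spike otherwise.
    t_out = early + 10.0 if (x1 != x2) else late + 10.0
    return [t1, t2], t_out
```

Under this kind of encoding, the network's task is to map the relative timing of its input spikes onto the correct timing of its output spike, which is exactly the non-linear timing problem the benchmarks probe.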

The paper also highlights that the new algorithm can learn from noisy temporal patterns, demonstrating its robustness to variability and noise, akin to biological systems. In practical terms, the capability to classify time-varying patterns over hundreds of milliseconds can lead to advancements in temporal processing applications, such as speech recognition and sensorimotor learning.
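Robustness to noisy temporal patterns can be probed by jittering spike times and checking how well a train still aligns with its target. The sketch below is a minimal illustration: `jitter_spike_train`, `coincidence_score`, and the sigma/window values are hypothetical helpers, and the coincidence measure is not the error metric used in the paper.

```python
import random

def jitter_spike_train(spikes, sigma=1.0, rng=None):
    """Add Gaussian timing jitter (std `sigma`, in ms) to each spike."""
    rng = rng or random.Random(0)
    return sorted(t + rng.gauss(0.0, sigma) for t in spikes)

def coincidence_score(train_a, train_b, window=2.0):
    """Fraction of spikes in train_a that fall within `window` ms of
    some spike in train_b -- a crude alignment measure for illustration."""
    if not train_a:
        return 1.0
    hits = sum(1 for ta in train_a
               if any(abs(ta - tb) <= window for tb in train_b))
    return hits / len(train_a)
```

Training on many jittered variants of the same underlying pattern is one simple way to test whether a learned mapping tolerates the kind of timing variability seen in biological spike trains.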

Conclusion and Future Directions

This research paves the way for further exploration of SNNs with hidden layers, offering a scalable learning approach adaptable to diverse neuron models. The ability to handle more complex spiking patterns opens up potential applications in hardware-efficient neural computation, such as neuromorphic hardware.

Future work could explore generalizing the algorithm for even more complex architectures such as recurrent networks, leveraging the principles of backpropagation through time. This development would further expand the potential applications of SNNs in areas requiring intricate temporal dynamics and adaptive behaviors.
