An Introduction to Spiking Neural Networks: Probabilistic Models, Learning Rules, and Applications (1812.03929v5)

Published 10 Dec 2018 in eess.SP, cs.IT, cs.LG, cs.NE, math.IT, and stat.ML

Abstract: Spiking Neural Networks (SNNs) are distributed trainable systems whose computing elements, or neurons, are characterized by internal analog dynamics and by digital and sparse synaptic communications. The sparsity of the synaptic spiking inputs and the corresponding event-driven nature of neural processing can be leveraged by hardware implementations that have demonstrated significant energy reductions as compared to conventional Artificial Neural Networks (ANNs). Most existing training algorithms for SNNs have been designed either for biological plausibility or through conversion from pre-trained ANNs via rate encoding. This paper aims at providing an introduction to SNNs by focusing on a probabilistic signal processing methodology that enables the direct derivation of learning rules leveraging the unique time encoding capabilities of SNNs. To this end, the paper adopts discrete-time probabilistic models for networked spiking neurons, and it derives supervised and unsupervised learning rules from first principles by using variational inference. Examples and open research problems are also provided.
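The discrete-time probabilistic neuron models the abstract refers to are commonly formulated as generalized linear models: each neuron's membrane potential is a weighted sum of filtered presynaptic spikes plus a bias, and the neuron emits a spike at each time step with a probability given by a sigmoid of that potential. The sketch below illustrates this idea; all parameter values (weights, kernel, firing rates) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: 3 presynaptic neurons, 100 time steps,
# exponentially decaying synaptic response kernel (illustrative values).
n_pre, T, tau = 3, 100, 5.0
w = rng.normal(0.0, 1.0, n_pre)          # synaptic weights
b = -1.0                                  # bias (threshold analogue)
kernel = np.exp(-np.arange(10) / tau)     # synaptic filter

# Sparse Bernoulli presynaptic spike trains (digital, event-driven inputs)
pre = (rng.random((T, n_pre)) < 0.1).astype(float)

post = np.zeros(T)
for t in range(T):
    # Filter each presynaptic train: convolve recent spikes with the kernel
    window = pre[max(0, t - len(kernel) + 1): t + 1][::-1]
    filtered = (kernel[: len(window), None] * window).sum(axis=0)
    u = w @ filtered + b                  # membrane potential
    p = sigmoid(u)                        # spiking probability
    post[t] = float(rng.random() < p)     # Bernoulli spike
```

Because the spike is a Bernoulli sample with a differentiable probability, the log-likelihood of an observed spike train is differentiable in `w` and `b`, which is what allows learning rules to be derived directly via variational inference rather than by ANN conversion.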

Authors (4)
  1. Hyeryung Jang (24 papers)
  2. Osvaldo Simeone (326 papers)
  3. Brian Gardner (8 papers)
  4. André Grüning (10 papers)
Citations (2)
