
An Introduction to Probabilistic Spiking Neural Networks: Probabilistic Models, Learning Rules, and Applications (1910.01059v2)

Published 2 Oct 2019 in cs.LG, cs.NE, eess.SP, and stat.ML

Abstract: Spiking neural networks (SNNs) are distributed trainable systems whose computing elements, or neurons, are characterized by internal analog dynamics and by digital and sparse synaptic communications. The sparsity of the synaptic spiking inputs and the corresponding event-driven nature of neural processing can be leveraged by energy-efficient hardware implementations, which can offer significant energy reductions as compared to conventional artificial neural networks (ANNs). The design of training algorithms lags behind the hardware implementations. Most existing training algorithms for SNNs have been designed either for biological plausibility or through conversion from pretrained ANNs via rate encoding. This article provides an introduction to SNNs by focusing on a probabilistic signal processing methodology that enables the direct derivation of learning rules by leveraging the unique time-encoding capabilities of SNNs. We adopt discrete-time probabilistic models for networked spiking neurons and derive supervised and unsupervised learning rules from first principles via variational inference. Examples and open research problems are also provided.
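The abstract refers to discrete-time probabilistic models for spiking neurons with learning rules derived from the likelihood. A common instance of this class (and the one used in this line of work) is a generalized linear model (GLM) neuron: the membrane potential is a weighted sum of filtered input spike trains, and a spike is emitted at each step with probability given by a sigmoid of that potential. The sketch below is a minimal, illustrative version under assumed parameter choices (exponential synaptic traces, fixed decay, random weights); the variable names and constants are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Assumed setup: 5 presynaptic spike trains over T discrete time steps.
T, n_inputs = 100, 5
w = rng.normal(0.0, 0.5, n_inputs)        # synaptic weights (hypothetical init)
bias = -1.0                               # firing threshold / bias term
alpha = 0.7                               # exponential synaptic trace decay
x = rng.binomial(1, 0.2, (T, n_inputs))   # binary input spikes

trace = np.zeros(n_inputs)                # filtered (traced) inputs
spikes = np.zeros(T, dtype=int)
log_lik = 0.0
for t in range(T):
    trace = alpha * trace + x[t]          # low-pass filter each input train
    u = w @ trace + bias                  # membrane potential
    p = sigmoid(u)                        # Bernoulli spiking probability
    spikes[t] = int(rng.random() < p)
    # Log-likelihood of the sampled spike under the model; its gradient
    # w.r.t. w is (spike - p) * trace, a local, three-factor-style rule.
    log_lik += spikes[t] * np.log(p) + (1 - spikes[t]) * np.log(1 - p)
```

Because the spike distribution is an explicit Bernoulli, maximum-likelihood gradients such as `(spikes[t] - p) * trace` follow directly, which is the sense in which learning rules can be "derived from first principles" rather than hand-designed.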

Authors (4)
  1. Hyeryung Jang (24 papers)
  2. Osvaldo Simeone (326 papers)
  3. Brian Gardner (8 papers)
  4. AndrĂ© GrĂ¼ning (10 papers)
Citations (70)
