
Stochasticity from function -- why the Bayesian brain may need no noise (1809.08045v3)

Published 21 Sep 2018 in q-bio.NC, cond-mat.dis-nn, cs.NE, physics.bio-ph, and stat.ML

Abstract: An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad-hoc source of well-behaved, explicit noise, either on the input or on the output side of single neuron dynamics, most often assuming an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may, in fact, have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functionally Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between the abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.
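The sampling-based encoding scheme the abstract refers to is commonly formalized as Gibbs sampling in a binary (Boltzmann-machine-style) network, where each unit's firing probability is a sigmoid of a membrane-potential-like local input. The following is a minimal toy sketch of that standard framework, not the paper's deterministic-network method; all parameter values and helper names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny Boltzmann machine: p(z) ∝ exp(0.5 z^T W z + b^T z), z ∈ {0,1}^n.
# Couplings and biases are arbitrary toy values.
W = np.array([[0.0, 1.2],
              [1.2, 0.0]])   # symmetric weights, zero diagonal
b = np.array([-0.5, 0.3])    # biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(W, b, steps=50_000, burn_in=1_000):
    """Gibbs sampling: each unit fires with probability sigmoid(local input)."""
    n = len(b)
    z = rng.integers(0, 2, size=n).astype(float)
    samples = []
    for t in range(steps):
        for i in range(n):
            u = b[i] + W[i] @ z           # membrane-potential analogue
            z[i] = float(rng.random() < sigmoid(u))
        if t >= burn_in:
            samples.append(z.copy())
    return np.array(samples)

samples = gibbs_sample(W, b)

# Exact target distribution for comparison.
states = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
logp = np.array([0.5 * s @ W @ s + b @ s for s in states])
p_exact = np.exp(logp - logp.max())
p_exact /= p_exact.sum()

# Empirical state frequencies from the sampler.
codes = (samples @ np.array([2, 1])).astype(int)
p_emp = np.bincount(codes, minlength=4) / len(samples)
print(np.round(p_exact, 3))
print(np.round(p_emp, 3))
```

The empirical state frequencies converge toward the exact Boltzmann distribution; the paper's contribution concerns how such sampling dynamics can arise without explicit noise, with correlations absorbed into synaptic interaction strengths.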

Authors (9)
  1. Dominik Dold (22 papers)
  2. Ilja Bytschok (8 papers)
  3. Akos F. Kungl (6 papers)
  4. Andreas Baumbach (10 papers)
  5. Oliver Breitwieser (14 papers)
  6. Walter Senn (23 papers)
  7. Johannes Schemmel (67 papers)
  8. Karlheinz Meier (34 papers)
  9. Mihai A. Petrovici (44 papers)
Citations (26)