A Radically New Theory of how the Brain Represents and Computes with Probabilities (1701.07879v4)

Published 26 Jan 2017 in q-bio.NC, cs.CV, and cs.NE

Abstract: The brain is believed to implement probabilistic reasoning and to represent information via population, or distributed, coding. Most previous population-based probabilistic (PPC) theories share several basic properties: 1) continuous-valued neurons; 2) fully(densely)-distributed codes, i.e., all(most) units participate in every code; 3) graded synapses; 4) rate coding; 5) units have innate unimodal tuning functions (TFs); 6) intrinsically noisy units; and 7) noise/correlation is considered harmful. We present a radically different theory that assumes: 1) binary units; 2) only a small subset of units, i.e., a sparse distributed representation (SDR) (cell assembly), comprises any individual code; 3) binary synapses; 4) signaling formally requires only single (i.e., first) spikes; 5) units initially have completely flat TFs (all weights zero); 6) units are far less intrinsically noisy than traditionally thought; rather 7) noise is a resource generated/used to cause similar inputs to map to similar codes, controlling a tradeoff between storage capacity and embedding the input space statistics in the pattern of intersections over stored codes, epiphenomenally determining correlation patterns across neurons. The theory, Sparsey, was introduced 20+ years ago as a canonical cortical circuit/algorithm model achieving efficient sequence learning/recognition, but was not elaborated as an alternative to PPC theories. Here, we show that: a) the active SDR simultaneously represents both the most similar/likely input and the entire (coarsely-ranked) similarity/likelihood distribution over all stored inputs (hypotheses); and b) given an input, the SDR code selection algorithm, which underlies both learning and inference, updates both the most likely hypothesis and the entire likelihood distribution (cf. belief update) with a number of steps that remains constant as the number of stored items increases.

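As a rough illustration of claim (a), the minimal Python sketch below shows how the intersections between a single active sparse binary code and previously stored codes can carry a coarse similarity/likelihood ranking over all stored hypotheses at once. This is not the paper's Sparsey algorithm (in particular, it does not reproduce the fixed-time code selection/update of claim (b)); the field size, code size, and overlap values are assumed purely for the example.

```python
import random

# Minimal sketch (not the authors' Sparsey implementation): intersections
# between one active sparse binary code and stored codes yield a coarse,
# implicitly represented likelihood ranking over stored hypotheses.
# All sizes and overlaps below are illustrative assumptions.

NUM_UNITS = 1000   # binary units in the coding field (assumed)
CODE_SIZE = 20     # active units per code, i.e., sparse (assumed)

def random_code(rng):
    """A random sparse binary code, as a set of active unit indices."""
    return set(rng.sample(range(NUM_UNITS), CODE_SIZE))

def overlapping_code(base, overlap, rng):
    """A new code sharing `overlap` active units with `base` (models the
    similar-inputs-to-similar-codes property)."""
    kept = set(rng.sample(sorted(base), overlap))
    rest = rng.sample(sorted(set(range(NUM_UNITS)) - base), CODE_SIZE - overlap)
    return kept | set(rest)

rng = random.Random(0)

# Store a few hypotheses (learned codes); the two "A" variants overlap strongly.
stored = {"A": random_code(rng)}
stored["A_similar"] = overlapping_code(stored["A"], 15, rng)
stored["B"] = random_code(rng)

# Active code for a new input, assumed similar to hypothesis A.
active = overlapping_code(stored["A"], 18, rng)

# The single active code carries the whole (coarsely ranked) distribution:
# reading it out is just an intersection size per stored code.
ranking = sorted(stored, key=lambda h: len(active & stored[h]), reverse=True)
for h in ranking:
    print(h, len(active & stored[h]) / CODE_SIZE)
```

The point the sketch mirrors is that similar inputs are assigned codes with larger intersections, so the currently active code implicitly ranks all stored hypotheses without maintaining any separate, explicit distribution data structure.
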
Citations (2)
