
Dendritic cortical microcircuits approximate the backpropagation algorithm (1810.11393v1)

Published 26 Oct 2018 in q-bio.NC, cs.LG, and cs.NE

Abstract: Deep learning has seen remarkable developments over the last years, many of them inspired by neuroscience. However, the main learning mechanism behind these advances - error backpropagation - appears to be at odds with neurobiology. Here, we introduce a multilayer neuronal network model with simplified dendritic compartments in which error-driven synaptic plasticity adapts the network towards a global desired output. In contrast to previous work our model does not require separate phases and synaptic learning is driven by local dendritic prediction errors continuously in time. Such errors originate at apical dendrites and occur due to a mismatch between predictive input from lateral interneurons and activity from actual top-down feedback. Through the use of simple dendritic compartments and different cell-types our model can represent both error and normal activity within a pyramidal neuron. We demonstrate the learning capabilities of the model in regression and classification tasks, and show analytically that it approximates the error backpropagation algorithm. Moreover, our framework is consistent with recent observations of learning between brain areas and the architecture of cortical microcircuits. Overall, we introduce a novel view of learning on dendritic cortical circuits and on how the brain may solve the long-standing synaptic credit assignment problem.

Authors (4)
  1. João Sacramento (27 papers)
  2. Rui Ponte Costa (13 papers)
  3. Yoshua Bengio (601 papers)
  4. Walter Senn (23 papers)
Citations (300)

Summary

Review of "Dendritic cortical microcircuits approximate the backpropagation algorithm"

In this manuscript, the authors propose a computational model inspired by the known structural and functional properties of neocortical pyramidal cells. The model describes a mechanism for error-driven learning in neuronal circuits that addresses the biological-implausibility criticisms commonly levelled at error backpropagation as used in deep learning. It embeds a neural approximation of error backpropagation within simplified multicompartment representations of cortical microcircuits.

The core of the paper is a three-compartment model of pyramidal neurons, capturing somatic, basal, and apical dendritic integration zones. Basal dendrites receive bottom-up input while apical dendrites receive top-down projections, and a population of lateral interneurons learns to predict that top-down input. The apical compartment thereby encodes a prediction error: the mismatch between the lateral input from local interneurons and the actual top-down feedback. This configuration allows each neuron to participate simultaneously in forward activity propagation and backward error transmission.
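
To make this architecture concrete, here is a minimal sketch of such compartmental dynamics, assuming leaky-integrator neurons whose soma relaxes toward a conductance-weighted mix of its dendritic potentials; the conductance values, the Euler step, and all variable names are illustrative choices, not the authors' exact parameterization:

```python
import numpy as np

def simulate_pyramidal_layer(r_in, r_td, r_int, W_up, W_td, W_int,
                             g_leak=0.1, g_basal=1.0, g_apical=0.8,
                             dt=0.1, n_steps=200):
    """Leaky-integrator dynamics for one layer of three-compartment
    pyramidal neurons (sketch). The apical potential combines top-down
    feedback with lateral interneuron input; when the interneurons
    predict the feedback well (via inhibitory weights in W_int), the
    apical compartment settles near zero and carries no error."""
    u = np.zeros(W_up.shape[0])                # somatic potentials
    for _ in range(n_steps):
        v_basal = W_up @ r_in                  # bottom-up drive
        v_apical = W_td @ r_td + W_int @ r_int # feedback + lateral prediction
        # soma relaxes toward a conductance-weighted mix of compartments
        u += dt * (-g_leak * u
                   + g_basal * (v_basal - u)
                   + g_apical * (v_apical - u))
    return u, v_basal, v_apical

# Example: 20 pyramidal cells, 30 bottom-up inputs, 10 feedback sources
rng = np.random.default_rng(0)
u, vb, va = simulate_pyramidal_layer(
    rng.normal(size=30), rng.normal(size=10), rng.normal(size=10),
    W_up=rng.normal(0, 0.1, (20, 30)),
    W_td=rng.normal(0, 0.1, (20, 10)),
    W_int=-rng.normal(0, 0.1, (20, 10)))
```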

Critically, the framework aligns well with known cortical microcircuit topology and with recent neuroscientific findings suggesting that distal dendritic compartments contribute to error signaling and learning. The paper's analytical work demonstrates that, in suitable parameter regimes, the proposed learning rules approximate error backpropagation, in particular in the limit of weak top-down feedback.
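
To convey the flavor of these rules, here is a hedged LaTeX sketch of the kind of dendritic prediction-error plasticity the paper builds on; the notation and attenuation factor are approximate, not a verbatim reproduction of the authors' equations:

```latex
% Sketch of dendritic prediction-error plasticity (notation approximate).
% Bottom-up synapses move the somatic rate toward the rate predicted
% from the basal compartment alone; any residual mismatch reflects
% apical (top-down) input, i.e. a backpropagated error.
\[
\dot{W}^{\mathrm{up}} = \eta \,\bigl(\phi(u) - \phi(\hat{v}_{\mathrm{B}})\bigr)\, r^{\top},
\qquad
\hat{v}_{\mathrm{B}} = \frac{g_{\mathrm{B}}}{g_{\mathrm{l}} + g_{\mathrm{B}} + g_{\mathrm{A}}}\, v_{\mathrm{B}}
\]
```

with $u$ the somatic potential, $v_{\mathrm{B}}$ the basal dendritic potential, $\phi$ the rate function, $r$ the presynaptic rates, and $g_{\mathrm{l}}, g_{\mathrm{B}}, g_{\mathrm{A}}$ the leak, basal, and apical conductances. In this form, the apical mismatch plays the role of the backpropagated error term in the classical delta rule.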

The authors substantiate their theoretical analysis with computational experiments on regression and classification tasks, most notably MNIST digit recognition. In these simulations the model not only tracks conventional backpropagation qualitatively but also exhibits dynamics reminiscent of feedback alignment, in which learning succeeds despite initially asymmetric top-down and bottom-up weights, with the weights gradually coming into alignment as training proceeds.
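
Feedback alignment itself is simple to illustrate. The toy NumPy script below is a generic sketch of that mechanism, not the authors' dendritic implementation: the backward pass routes errors through a fixed random matrix B instead of the transpose of the forward weights, and learning still proceeds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer regression network trained with feedback alignment:
# errors flow backward through a fixed random matrix B rather than W2.T.
n_in, n_hid, n_out, lr = 30, 50, 10, 0.05
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))    # fixed feedback weights

X = rng.normal(size=(200, n_in))
T = X @ rng.normal(size=(n_in, n_out))    # random linear teacher

for epoch in range(500):
    h = np.tanh(X @ W1.T)                 # forward pass, hidden layer
    y = h @ W2.T                          # forward pass, output layer
    e = y - T                             # output error
    dh = (e @ B.T) * (1.0 - h**2)         # error routed through fixed B
    W2 -= lr * e.T @ h / len(X)           # gradient step on output weights
    W1 -= lr * dh.T @ X / len(X)          # pseudo-gradient step on hidden weights
    if epoch % 100 == 0:
        print(epoch, float(np.mean(e**2)))
```

Over training, the forward weights W2 tend to align with the transpose of B, which is the hallmark of feedback alignment and the sense in which "weights naturally adjust despite initial asymmetry."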

From a practical perspective, this research offers insights for potential neuromorphic hardware implementations of AI systems. Moreover, the synaptic plasticity mechanisms implicit in the proposed model may inspire more efficient learning algorithms that operate on continuous-time input, moving beyond the separate-phase training schemes required by earlier biologically motivated approaches.

On the theoretical front, the work adds to the growing body of research aimed at reconciling artificial neural network training principles with biological plausibility. Specifically, it contributes to the synaptic credit assignment problem by proposing a biologically plausible mechanism for its solution via dendritic processing. Looking ahead, the general framework of dendritic error backpropagation could inform new algorithms tailored to time-continuous learning, perhaps leading to more biologically aligned learning systems.

The manuscript opens several avenues for further investigation, such as exploring network variants whose parameters are tuned for greater biological realism, or testing diverse task paradigms to gauge the versatility of the approach. Establishing a more comprehensive neuroscientific foundation for interpreting error signals at the dendritic level remains an intriguing pursuit, with the potential to impact both computational neuroscience and machine learning.