Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment (2312.09257v2)

Published 1 Dec 2023 in cs.NE, cs.LG, and q-bio.NC

Abstract: In this survey, we examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology. These processes are unified under one possible taxonomy, which is constructed based on how a learning algorithm answers a central question underpinning the mechanisms of synaptic plasticity in complex adaptive neuronal systems: where do the signals that drive the learning in individual elements of a network come from and how are they produced? In this unified treatment, we organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors and its known criticisms. The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes, wherein lies an important opportunity to build a strong bridge between machine learning, computational neuroscience, and cognitive science.

Citations (9)

Summary

  • The paper introduces a taxonomy that categorizes credit assignment methods into six families based on signal origins, unifying diverse approaches in neural adaptation.
  • The study highlights methods like Hebbian learning, feedback alignment, and predictive coding as effective, brain-like alternatives to traditional backpropagation.
  • It emphasizes future research on hybrid methodologies, including energy-based and forward-only approaches, to enhance robustness, generalization, and energy efficiency in AI.

Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment

The paper "Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment" by Alexander Ororbia offers a comprehensive exploration of algorithms for credit assignment in artificial neural networks (ANNs) inspired by neurobiological processes. The survey categorizes these algorithms, addressing the fundamental question of where the signals that drive learning in neural networks originate and how they are generated.

Taxonomy of Brain-Inspired Learning Schemes

The paper introduces a taxonomy of neurobiologically motivated credit assignment mechanisms, organizing them into six families according to the origin of their learning signals: implicit versus explicit, with explicit signals further divided into global and local varieties. This taxonomy is intended to provide a unified framework for understanding and comparing various approaches to neural network learning and adaptation.

Implicit Signals

The implicit signal family includes Hebbian learning, where synaptic changes are based purely on local neuron interactions. These methods depend on the correlation between pre-synaptic and post-synaptic activities, leading to simple and efficient updates that align well with biological plausibility. However, they often require mechanisms to stabilize weight growth, such as normalization or anti-Hebbian counter-pressures.
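
To make the update rule concrete, the following is a minimal NumPy sketch of a Hebbian update with an Oja-style normalizing term as the stabilizing counter-pressure; the layer sizes, learning rate, and random data are arbitrary assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 8 presynaptic units, 4 postsynaptic units (sizes are arbitrary).
W = 0.1 * rng.standard_normal((4, 8))
lr = 0.01

def hebbian_step(W, x):
    """One purely local update: the Oja-style term keeps weights bounded
    instead of letting plain Hebbian growth diverge."""
    y = W @ x                                        # postsynaptic activity
    # Plain Hebb would be lr * outer(y, x); the subtracted term is the
    # normalizing (anti-Hebbian-like) counter-pressure.
    dW = lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W + dW

for _ in range(100):
    x = rng.standard_normal(8)                       # presynaptic input sample
    W = hebbian_step(W, x)
```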

Explicit Global Signals

Explicit global signals include methods such as feedback alignment, in which fixed random feedback weights replace the symmetric ones required by backpropagation, addressing the weight transport problem. Neuromodulatory approaches, such as three-factor Hebbian plasticity, introduce a modulatory signal, like dopamine, that gates local updates, offering a biologically relevant model of synaptic adaptation. These approaches also forge a link between artificial learning rules and neuromodulatory theories of reward-driven plasticity.
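
The following is a minimal sketch of the feedback-alignment idea on a two-layer regression network: the error is carried backward through a fixed random matrix B rather than the transpose of the forward weights. Layer sizes, the learning rate, and the toy target are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer network; all sizes and the learning rate are arbitrary choices.
n_in, n_hid, n_out, lr = 10, 16, 1, 0.01
W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
B  = 0.1 * rng.standard_normal((n_hid, n_out))   # fixed random feedback, never trained

def fa_step(x, target):
    global W1, W2
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target                      # output error (the global signal)
    # Backpropagation would use W2.T @ e; feedback alignment routes the error
    # through the random matrix B, sidestepping weight transport.
    delta_h = (B @ e) * (1.0 - h ** 2)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

for _ in range(200):
    x = rng.standard_normal(n_in)
    fa_step(x, np.array([x.sum()]))     # toy target: sum of the inputs
```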

Non-Synergistic Local Signals

In non-synergistic local signals, credit assignment occurs through local mechanisms such as synthetic local updates, where error signals are synthesized locally within network layers. This category includes approaches that decouple forward and backward computations, promoting parallelism and addressing locking issues inherent in traditional backpropagation.
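
The sketch below illustrates the decoupling idea in the spirit of synthetic local updates: a small local module predicts the error signal a layer would otherwise have to wait for, so the layer can update immediately. The sizes, learning rates, and the placeholder "delayed" error are assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# One hidden layer plus a tiny linear "synthesizer" that predicts the error
# signal the layer would otherwise wait for (sizes and rates are assumed).
n_in, n_hid, lr = 10, 16, 0.01
W = 0.1 * rng.standard_normal((n_hid, n_in))
S = np.zeros((n_hid, n_hid))            # synthesizer: hidden activity -> predicted error

def layer_step(x, delayed_delta):
    """Update W immediately with the synthesized error (no backward lock),
    then refine the synthesizer once the true top-down signal arrives."""
    global W, S
    h = np.tanh(W @ x)
    pred_delta = S @ h                   # locally synthesized error signal
    W -= lr * np.outer(pred_delta * (1.0 - h ** 2), x)
    S -= lr * np.outer(pred_delta - delayed_delta, h)

for _ in range(100):
    x = rng.standard_normal(n_in)
    # Placeholder for the top-down error that would eventually arrive from the
    # layer above in a full network; random here purely for illustration.
    delayed_delta = 0.1 * rng.standard_normal(n_hid)
    layer_step(x, delayed_delta)
```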

Synergistic Local Signals

Synergistic local signals involve more integrated learning processes, such as target propagation and predictive coding. Predictive coding, in particular, models brain processes in which neural predictions are continuously refined by minimizing prediction errors. These methods often incorporate structured feedback pathways to achieve biological realism and address several of backpropagation's criticisms at once.
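
The following is a minimal sketch of a single predictive-coding layer: an inference loop settles the latent state by reducing the prediction error, and the synaptic update is then purely local, driven by the residual error. The layer sizes, learning rates, and number of settling steps are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# One generative layer: latent state z predicts observation x through W.
n_lat, n_obs = 8, 12
W = 0.1 * rng.standard_normal((n_obs, n_lat))
lr_state, lr_w, n_settle = 0.1, 0.01, 20

def pc_step(x):
    """Iteratively refine the latent state to shrink the prediction error,
    then apply a local, Hebbian-like weight update driven by the residual."""
    global W
    z = np.zeros(n_lat)
    for _ in range(n_settle):
        err = x - W @ z                  # prediction error at the observation layer
        z += lr_state * (W.T @ err)      # state inference: descend the error
    err = x - W @ z
    W += lr_w * np.outer(err, z)         # local synaptic update from the residual

for _ in range(100):
    pc_step(rng.standard_normal(n_obs))
```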

Energy-Based and Forward-Only Approaches

Energy-based approaches, like contrastive Hebbian learning and equilibrium propagation, rely on energy minimization principles, simulating phase-based learning found in biological systems. Forward-only approaches, a newer category, aim to use inference alone to drive learning by leveraging contrastive principles, promising a paradigm that minimizes energy and computational overhead.
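
As one way to picture the forward-only, contrastive idea, the sketch below trains a single layer with no backward pass: its own "goodness" (sum of squared activities) is pushed up for positive samples and down for negative ones. The threshold, sizes, learning rate, and the crude negative-sample construction are assumptions, not a prescription from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# One forward-only layer; all hyperparameters are arbitrary choices.
n_in, n_hid, lr, theta = 10, 16, 0.01, 2.0
W = 0.1 * rng.standard_normal((n_hid, n_in))

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward_only_step(x, positive):
    """No backward pass: the layer's own activity supplies the learning signal."""
    global W
    h = np.maximum(0.0, W @ x)                   # forward pass only
    goodness = np.sum(h ** 2)
    sign = 1.0 if positive else -1.0
    # Raise goodness for positive samples and lower it for negatives,
    # gated by how far the sample sits from the threshold theta.
    gate = sigmoid(-sign * (goodness - theta))
    W += sign * lr * gate * 2.0 * np.outer(h, x)

for _ in range(200):
    x = rng.standard_normal(n_in)
    forward_only_step(x, positive=True)
    forward_only_step(rng.permutation(x), positive=False)   # crude negative sample
```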

Implications and Future Directions

This survey highlights the intricate relationship between neurobiological mechanisms and artificial neural computation. By constructing a bridge between these realms, the paper suggests opportunities for developing more robust and efficient brain-inspired AI systems. Future exploration could focus on hybrid methodologies that combine strengths across these families, extending applications to low-energy analog and neuromorphic hardware. As AI continues to evolve, understanding and integrating these biologically-plausible models into machine intelligence could lead to systems with improved generalization, robustness, and energy efficiency. The paper also opens avenues for interdisciplinary research, combining insights from machine learning, neuroscience, and cognitive science.
