Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain (2009.05359v5)

Published 11 Sep 2020 in cs.NE, cs.AI, cs.LG, and q-bio.NC

Abstract: The backpropagation of error algorithm (backprop) has been instrumental in the recent success of deep learning. However, a key question remains as to whether backprop can be formulated in a manner suitable for implementation in neural circuitry. The primary challenge is to ensure that any candidate formulation uses only local information, rather than relying on global signals as in standard backprop. Recently several algorithms for approximating backprop using only local signals have been proposed. However, these algorithms typically impose other requirements which challenge biological plausibility: for example, requiring complex and precise connectivity schemes, or multiple sequential backwards phases with information being stored across phases. Here, we propose a novel algorithm, Activation Relaxation (AR), which is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system. Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, utilises only a single parallel backwards relaxation phase, and can operate on arbitrary computation graphs. We illustrate these properties by training deep neural networks on visual classification tasks, and describe simplifications to the algorithm which remove further obstacles to neurobiological implementation (for example, the weight-transport problem, and the use of nonlinear derivatives), while preserving performance.

Citations (16)

Summary

  • The paper presents a novel AR algorithm that approximates backpropagation gradients using only local neural signals.
  • The paper demonstrates that AR achieves learning performance on MNIST and FashionMNIST comparable to standard backpropagation.
  • The paper shows that removing the weight-transport requirement and the need for exact nonlinear derivatives enhances biological plausibility without compromising efficacy.

Exploring Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain

The paper "Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain" presents a novel approach to bridge the gap between the biological plausibility of neural learning algorithms and the effectiveness of backpropagation in deep learning architectures. The authors, Millidge et al., propose an algorithm named Activation Relaxation (AR), which aims to address the challenges of implementing backpropagation using only local information—a key consideration for any potential implementation in biological neural networks.

Key Contributions

The Activation Relaxation algorithm is founded on the principle of constructing backpropagation gradients as the equilibrium point of a dynamical system. This approach sidesteps the complex connectivity schemes and non-local dependencies that often undermine the biological plausibility of standard backpropagation implementations. Key features of the AR algorithm include:

  • Local Information Utilization: The AR algorithm operates using exclusively local information, thereby reducing reliance on global signals typically required in backpropagation.
  • Single Computational Unit: Unlike many other biologically plausible algorithms, which might require distinct neural populations for values and errors, AR utilises a homogeneous neural structure.
  • Convergence to Backpropagation Gradients: The algorithm converges rapidly and accurately to the backpropagation gradients, demonstrating that it can solve the credit assignment problem efficiently (a minimal sketch of the relaxation dynamics follows this list).
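
To make the dynamical-systems construction concrete, below is a minimal NumPy sketch of the AR backward phase for a simple fully connected network. It is an illustrative reading of the scheme, not the authors' code: the function names, the Euler step size, and the iteration count are our assumptions. Each auxiliary unit e_l is driven only by its own state and by the layer immediately above it, and its fixed point coincides with the backpropagated error dL/dh_l.

```python
# Minimal sketch of the Activation Relaxation (AR) backward phase for an
# MLP with h_l = W_l a_{l-1}, a_l = f(h_l). Names and hyperparameters are
# illustrative assumptions, not the paper's implementation.
import numpy as np

def forward(weights, x, f=np.tanh):
    """Forward pass; stores the pre-activations h_l needed by the relaxation."""
    hs, acts = [], [x]
    for W in weights:
        h = W @ acts[-1]
        hs.append(h)
        acts.append(f(h))
    return hs, acts

def ar_backward(weights, hs, dL_dhL, n_steps=100, lr=0.1,
                fprime=lambda h: 1.0 - np.tanh(h) ** 2):
    """Relax auxiliary activations e_l towards the backprop errors dL/dh_l.

    The output layer is clamped to the local loss gradient; each hidden
    layer follows the leaky dynamics
        de_l/dt = -e_l + f'(h_l) * (W_{l+1}^T e_{l+1}),
    whose fixed point is exactly the backpropagated error signal.
    """
    L = len(weights)
    es = [np.zeros_like(h) for h in hs]
    es[-1] = dL_dhL                      # clamped boundary condition
    for _ in range(n_steps):
        for l in range(L - 2, -1, -1):   # each update uses only local signals
            drive = fprime(hs[l]) * (weights[l + 1].T @ es[l + 1])
            es[l] += lr * (-es[l] + drive)
    return es
```

Because every update touches only quantities available at the corresponding layer and its immediate successor, the relaxation can in principle run in parallel across layers, which is the locality property the paper emphasises.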

Empirical Evaluation

The research showcases the efficacy of the AR algorithm by empirically evaluating its performance on standard visual classification datasets, specifically MNIST and FashionMNIST. These experiments demonstrate that AR can train deep neural networks with a performance comparable to traditional backpropagation, indicating that the algorithm successfully enables deep network training using only local updates.
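
As a quick numerical illustration of the convergence claim (a toy sanity check, not a reproduction of the paper's MNIST experiments), one can reuse the sketch above and compare the relaxed activations against exact backprop deltas on random data; the dimensions and squared-error loss here are arbitrary choices:

```python
# Toy check: the AR fixed point should match exact backprop gradients.
rng = np.random.default_rng(0)
sizes = [10, 32, 32, 5]
weights = [rng.normal(0, 0.3, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=sizes[0])
y = rng.normal(size=sizes[-1])
hs, acts = forward(weights, x)

# Local output gradient for L = 0.5 * ||a_L - y||^2 with a_L = tanh(h_L)
dL_dhL = (acts[-1] - y) * (1.0 - np.tanh(hs[-1]) ** 2)

es = ar_backward(weights, hs, dL_dhL)

# Exact backprop deltas for reference
deltas = [None] * len(weights)
deltas[-1] = dL_dhL
for l in range(len(weights) - 2, -1, -1):
    deltas[l] = (1.0 - np.tanh(hs[l]) ** 2) * (weights[l + 1].T @ deltas[l + 1])

for l, (e, d) in enumerate(zip(es, deltas)):
    print(l, np.max(np.abs(e - d)))      # discrepancies shrink with more steps

# Hebbian-style weight gradients from the relaxed errors: dL/dW_l = e_l a_{l-1}^T
grads = [np.outer(e, a) for e, a in zip(es, acts[:-1])]
```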

Simplification and Biological Plausibility

The paper also explores simplifications of the AR algorithm that enhance its biological plausibility:

  • Addressing the Weight-Transport Problem: By replacing precise weight transport with random or learnable feedback weights, the authors show that the AR algorithm maintains performance without necessitating the exact symmetry of forward and backward weights.
  • Nonlinear Derivative Omissions: The algorithm's performance remains robust even when the nonlinear activation-function derivatives are omitted from the updates, suggesting that precise derivative information may be less critical for learning than commonly assumed (see the variant sketch after this list).
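
The following variant of the earlier sketch illustrates both simplifications under the same toy setup. Fixed random feedback matrices stand in for precise weight transport (in the spirit of feedback alignment; the paper also considers learnable backward weights), and setting use_fprime=False drops the nonlinear derivative from the update. The function name and all hyperparameters are illustrative assumptions:

```python
# Biologically plausible variants of the AR relaxation: random feedback
# weights B_l replace W_{l+1}^T, and the f'(h_l) factor is optional.
def ar_backward_plausible(weights, hs, dL_dhL, feedback=None,
                          use_fprime=False, n_steps=100, lr=0.1,
                          fprime=lambda h: 1.0 - np.tanh(h) ** 2):
    L = len(weights)
    if feedback is None:                 # fixed random backward weights
        rng = np.random.default_rng(1)
        feedback = [rng.normal(0, 0.3, W.T.shape) for W in weights]
    es = [np.zeros_like(h) for h in hs]
    es[-1] = dL_dhL
    for _ in range(n_steps):
        for l in range(L - 2, -1, -1):
            drive = feedback[l + 1] @ es[l + 1]
            if use_fprime:               # False drops the nonlinear derivative
                drive = fprime(hs[l]) * drive
            es[l] += lr * (-es[l] + drive)
    return es
```

Both variants yield approximate rather than exact gradients; the paper reports that this approximation is nonetheless sufficient to preserve learning performance.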

Theoretical and Practical Implications

The Activation Relaxation algorithm carries significant implications for both theoretical neuroscience and machine learning. By demonstrating that local learning rules can effectively approximate backpropagation, this work supports the potential compatibility of deep-learning-style credit assignment with biological processes. Theoretically, it both challenges and enriches existing paradigms such as the NGRAD hypothesis, providing an alternative framework for understanding gradient computation in the brain.

From a practical standpoint, the AR algorithm offers a more biologically plausible framework that could inspire the design of neuromorphic systems and influence future developments in autonomous learning agents capable of operating with limited supervision and feedback.

Future Directions

The research opens several avenues for future exploration. Subsequent studies might focus on applying the AR algorithm to more complex architectures and datasets to evaluate its scalability and adaptability. Moreover, investigating the synchronization of the feedforward and relaxation phases in the AR algorithm, particularly in continuous environments, could enhance its applicability within dynamic systems.

In summary, the Activation Relaxation algorithm represents a substantial step towards approximating backpropagation through biologically plausible means. It combines theoretical innovation with empirical validation, providing a foundation on which further advances in biologically inspired learning algorithms can be built.