A Review of Neuroscience-Inspired Machine Learning (2403.18929v1)

Published 16 Feb 2024 in cs.NE and cs.LG

Abstract: One major criticism of deep learning centers around the biological implausibility of the credit assignment schema used for learning -- backpropagation of errors. This implausibility translates into practical limitations, spanning scientific fields, including incompatibility with hardware and non-differentiable implementations, thus leading to expensive energy requirements. In contrast, biologically plausible credit assignment is compatible with practically any learning condition and is energy-efficient. As a result, it accommodates hardware and scientific modeling, e.g. learning with physical systems and non-differentiable behavior. Furthermore, it can lead to the development of real-time, adaptive neuromorphic processing systems. In addressing this problem, an interdisciplinary branch of artificial intelligence research that lies at the intersection of neuroscience, cognitive science, and machine learning has emerged. In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks, discussing the solutions they provide for different scientific fields as well as their advantages on CPUs, GPUs, and novel implementations of neuromorphic hardware. We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.


Summary

  • The paper surveys neuroscience-inspired learning algorithms that address backpropagation’s biological implausibility with energy-efficient, hardware-compatible credit assignment methods.
  • It details methodologies such as predictive coding, contrastive Hebbian learning, and forward-only learning, which enable parallel and localized computation akin to biological neural processing.
  • The review outlines future directions, such as developing flexible software libraries and dynamic neuromorphic systems, for leveraging bio-inspired techniques to overcome digital hardware limitations.

Neuroscience-Inspired Machine Learning: A Review

The paper presents a comprehensive examination of neuroscience-inspired learning algorithms that offer an alternative to traditional backpropagation (BP) in artificial neural networks (ANNs). The discussion begins by highlighting the primary criticism of backpropagation: its biological implausibility. This implausibility has practical consequences for hardware compatibility and energy efficiency. Unlike BP, biologically plausible credit assignment methods adapt to a wide range of learning conditions, are compatible with emerging hardware, and offer a path toward real-time, adaptive neuromorphic systems.

Critique of Backpropagation

To appreciate the innovations that bio-plausible methods introduce, it is important to first recognize the critiques leveled against BP. These concern both how information is processed and how synaptic weights are updated:

  1. Weight Transport (WT): BP requires the backward pass to reuse the (transposed) forward weights, which conflicts with the unidirectional synaptic connections of biological neural networks; the sketch after this list illustrates the issue and one surveyed way around it.
  2. Forward/Backward Locking (FL/BL): Each layer must wait for the full forward pass and the subsequent backward pass before it can update, a sequential dependency that prevents the parallel, asynchronous processing characteristic of biological systems.
  3. Forward-Backward Differentiation (FBD): The computations required in the backward pass differ from those of the forward pass, which is at odds with biological synaptic plasticity, where updates are local and concurrent with neural activity.
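
To make the weight-transport problem concrete, the sketch below (an illustration constructed for this summary, not code from the paper) contrasts a backpropagation-style hidden-layer error signal, which requires the transpose of the forward weight matrix, with feedback alignment, one of the surveyed alternatives, which substitutes a fixed random feedback matrix so that no forward-weight information has to travel backwards. All dimensions, names, and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 16, 4

# Forward weights of a toy two-layer network.
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
# Fixed random feedback matrix used by feedback alignment (never trained).
B2 = rng.normal(scale=0.1, size=(n_out, n_hidden))

x = rng.normal(size=n_in)
y = np.eye(n_out)[1]                      # one-hot target

h = np.tanh(W1 @ x)                       # hidden activity
e = W2 @ h - y                            # output error (linear readout)

# Backpropagation: the hidden error needs W2.T -- the "weight transport"
# that unidirectional biological synapses cannot perform.
delta_bp = (W2.T @ e) * (1.0 - h**2)

# Feedback alignment: replace W2.T with the fixed random matrix B2, so the
# backward pathway never accesses the forward weights.
delta_fa = (B2.T @ e) * (1.0 - h**2)

lr = 0.1
W2 -= lr * np.outer(e, h)                 # local update at the output layer
W1 -= lr * np.outer(delta_fa, x)          # hidden update driven by random feedback
```

A result surveyed in the paper is that training with such a random feedback pathway still works in practice, because the forward weights gradually come into alignment with the fixed feedback weights.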

Neuroscience-Inspired Learning Algorithms

By surveying several key bio-plausible learning algorithms, the paper provides valuable insights into how these challenges can be addressed:

  • Predictive Coding (PC): Neural activity settles so as to minimize locally computed prediction errors, a view grounded in both computational neuroscience and Bayesian inference; a minimal sketch is given below.
  • Contrastive Hebbian Learning (CHL): Operating on an energy-based model, CHL assigns credit by contrasting iterative settling phases, each of which relaxes to an equilibrium state.
  • Forward-Only Learning (FO): These schemes dispense with a dedicated feedback pathway, instead performing credit assignment through techniques such as propagating label or context information in the forward pass, or contrasting separate forward passes.

These approaches, grounded in neuroscience, have shown promise across various tasks traditionally dominated by BP, achieving comparable performance in fields ranging from computer vision to language processing.
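
To illustrate the local, iterative character of these methods, here is a minimal predictive-coding sketch for a two-level hierarchy. It is not the paper's implementation: the layer sizes, fixed step sizes, and Gaussian (squared-error) energy are assumptions made for clarity. Latent states are relaxed by gradient descent on the total prediction error, and each weight update uses only the error and activity available at that layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d0, d1, d2 = 10, 20, 30                 # observation size and two latent layer sizes

# Top-down generative weights: layer l predicts the layer below it.
W1 = rng.normal(scale=0.1, size=(d0, d1))
W2 = rng.normal(scale=0.1, size=(d1, d2))

def infer_and_learn(x, n_steps=50, lr_state=0.1, lr_w=0.01):
    """Relax the latent states on one input, then apply local weight updates."""
    global W1, W2
    z1, z2 = np.zeros(d1), np.zeros(d2)
    for _ in range(n_steps):
        # Prediction errors are computed locally at each level.
        e0 = x - W1 @ z1                # data vs. prediction from layer 1
        e1 = z1 - W2 @ z2               # layer-1 state vs. prediction from layer 2
        # Gradient descent on the total squared prediction error w.r.t. the states.
        z1 += lr_state * (W1.T @ e0 - e1)
        z2 += lr_state * (W2.T @ e1)
    # Hebbian-like updates: each uses only the local error and the local state.
    W1 += lr_w * np.outer(e0, z1)
    W2 += lr_w * np.outer(e1, z2)
    return float(e0 @ e0 + e1 @ e1)     # total prediction error after relaxation

x = rng.normal(size=d0)
print(infer_and_learn(x))
```

Because every quantity in these updates is available at the synapse it modifies, relaxation and learning can in principle proceed in parallel across layers, which is precisely the property the locking critiques above concern.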

Implications and Future Directions

The implications of successfully integrating these bio-plausible algorithms are multifaceted. Practically, they could transform neuromorphic systems, providing scalable, energy-efficient solutions where traditional BP falters. This opens the door to new hardware designs that bypass the digital intermediary stage and leverage the physics of the hardware itself for computation. Theoretically, the successes of neuroscience-aligned credit assignment deepen our understanding of learning paradigms akin to biological processes.

Looking forward, the paper identifies critical research directions. These include developing flexible, high-level software libraries, analogous to PyTorch, for experimenting with bio-plausible methods, and strengthening the stability and convergence theory needed to integrate these methods into deeper architectures. Extending this research to dynamic environments and time-series data, as well as to notions of mortal computation on evolving physical hardware, also presents a burgeoning area of interest.

Conclusion

This review lays the groundwork for addressing long-standing criticisms of BP and paves the way for bio-inspired learning algorithms that promise greater efficiency and more natural hardware compatibility. As such methods mature, they hold the potential to shape the future of machine learning, drawing on interdisciplinary collaborations that harness the efficiency and adaptive capacity of biological neural systems. Ultimately, such progress may transcend current digital hardware limitations, particularly in settings demanding energy efficiency and parallel processing.
