A biologically plausible neural network for local supervision in cortical microcircuits (2011.15031v1)
Published 30 Nov 2020 in cs.NE, cs.LG, and q-bio.NC
Abstract: The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of its weight-sharing requirement, it does not provide a plausible model of brain function. Here, in the context of a two-layer network, we derive an algorithm for training a neural network that avoids this problem by requiring neither explicit error computation nor backpropagation. Furthermore, our algorithm maps onto a neural network that bears a remarkable resemblance to the connectivity structure and learning rules of the cortex. We find that our algorithm empirically performs comparably to backprop on a number of datasets.
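To make the weight-sharing objection concrete: in backprop, the hidden-layer error is computed by propagating the output error through the transpose of the forward output weights, so the feedback pathway must mirror the feedforward one exactly. The sketch below (an illustration of the general problem, not the paper's algorithm) contrasts this with feedback alignment, a well-known local alternative that routes errors through a fixed random matrix `B` instead of `W2.T`; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Two-layer network trained without weight sharing: the hidden-layer
# "error" signal is carried by a fixed random matrix B rather than by
# W2.T, so no feedback pathway needs to copy the feedforward weights.
rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 16, 1, 0.05

W1 = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output weights
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed feedback weights, never trained

X = rng.normal(size=(128, n_in))
y = np.tanh(X @ rng.normal(size=(n_in, n_out)))  # toy regression targets

def loss(W1, W2):
    h = np.tanh(X @ W1.T)
    return np.mean((h @ W2.T - y) ** 2)

initial = loss(W1, W2)
for _ in range(200):
    h = np.tanh(X @ W1.T)            # hidden activity
    e = h @ W2.T - y                 # output error
    dh = (e @ B.T) * (1 - h ** 2)    # error routed via fixed B, not W2.T
    W2 -= lr * e.T @ h / len(X)      # exact gradient for the output layer
    W1 -= lr * dh.T @ X / len(X)     # local, transport-free hidden update
print(initial, loss(W1, W2))
```

Despite the feedback weights never matching the forward weights, the loss still decreases; this family of local learning rules is the backdrop against which the paper's microcircuit-based algorithm is developed.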