Learning arbitrary dynamics in efficient, balanced spiking networks using local plasticity rules (1705.08026v2)
Abstract: Understanding how recurrent neural circuits can learn to implement dynamical systems is a fundamental challenge in neuroscience. A major obstacle to deriving biologically plausible local learning rules is the credit assignment problem, i.e. determining the local contribution of each synapse to the network's global output error. Moreover, spiking recurrent networks implementing such tasks should not require excessive numbers of neurons and spikes, as they often do when adapted from rate models. Finally, these networks should be robust to noise and neuron death in order to sustain their representations in the face of such naturally occurring perturbations. We approach this problem by fusing the theory of efficient, balanced spiking networks (EBN) with nonlinear adaptive control theory. Local learning rules are ensured by feeding the network's own output error back into it, yielding a synaptic plasticity rule that depends solely on presynaptic inputs and postsynaptic feedback. The spiking efficiency and robustness of the network are guaranteed by maintaining a tight excitatory/inhibitory balance, which ensures that each spike represents a local projection of the global output error and minimizes a loss function. The resulting networks can learn to implement complex dynamics with very few neurons and spikes, exhibit the spike-train variability observed experimentally, and are extremely robust to noise and neuronal loss.
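
To make the abstract's central idea concrete, below is a minimal, illustrative Python sketch of an error-feedback plasticity rule in an EBN-style network. It is an assumption-laden toy, not the paper's implementation: the decoder D, the fast connections D.T @ D, and the thresholds follow standard EBN theory, while the learned slow weights W, the learning rate eta, the step function, and the harmonic-oscillator teacher are all hypothetical choices made for illustration. The key point it demonstrates is locality: each weight update uses only the presynaptic filtered rate and the postsynaptic error feedback.

```python
import numpy as np

# Illustrative sketch (assumed details, not the authors' exact code): an
# efficient balanced network (EBN) whose slow recurrent weights W are
# learned with a purely local rule, presynaptic rate x postsynaptic
# error feedback. The output error is fed back through D.T, so each
# neuron only ever sees a local projection of the global error.

rng = np.random.default_rng(0)
N, K = 50, 2                            # neurons, signal dimensions
dt, lam = 1e-3, 10.0                    # Euler step, decay rate
D = rng.normal(0.0, 0.1, size=(K, N))   # fixed decoding weights
Omega_f = D.T @ D                       # fast connections enforcing E/I balance
thresh = 0.5 * np.diag(Omega_f)         # EBN spiking thresholds
W = np.zeros((N, N))                    # slow recurrent weights (learned)
eta = 1e-3                              # learning rate (assumed value)

V = np.zeros(N)                         # membrane voltages
r = np.zeros(N)                         # filtered spike trains (presynaptic rates)

def step(x, x_dot):
    """One Euler step of the network plus the local plasticity update.

    x, x_dot: current value and derivative of the K-dim target trajectory.
    """
    global V, r, W
    x_hat = D @ r                       # network readout
    e = x - x_hat                       # global output error
    # Voltage dynamics: learned recurrence, target drive, and the output
    # error fed back into the network through D.T.
    V = V + dt * (-lam * V + W @ r + D.T @ (x_dot + lam * x) + D.T @ e)
    s = (V > thresh).astype(float)      # spikes (simplified threshold crossing)
    V -= Omega_f @ s                    # fast balancing reset after each spike
    r = r + dt * (-lam * r) + s         # leaky integration of spikes
    # Local rule: postsynaptic feedback (D.T @ e) times presynaptic rate r.
    W += eta * np.outer(D.T @ e, r)
    return x_hat

# Example teacher: a 2-D harmonic oscillator x_dot = A x (assumed target).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = np.array([1.0, 0.0])
for _ in range(20000):
    x_dot = A @ x
    step(x, x_dot)
    x = x + dt * x_dot                  # advance the teacher trajectory
```

Under these assumptions, the update np.outer(D.T @ e, r) is local in exactly the sense the abstract describes: entry W[i, j] changes based only on neuron i's feedback signal and neuron j's filtered output, with no global gradient computation.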