
Thalamocortical motor circuit insights for more robust hierarchical control of complex sequences

Published 23 Jun 2020 in cs.LG, cs.NE, q-bio.NC, and stat.ML (arXiv:2006.13332v1)

Abstract: We study learning of recurrent neural networks that produce temporal sequences consisting of the concatenation of re-usable "motifs". In the context of neuroscience or robotics, these motifs would be the motor primitives from which complex behavior is generated. Given a known set of motifs, can a new motif be learned without affecting the performance of the known set and then used in new sequences without first explicitly learning every possible transition? Two requirements enable this: (i) parameter updates while learning a new motif do not interfere with the parameters used for the previously acquired ones; and (ii) a new motif can be robustly generated when starting from the network state reached at the end of any of the other motifs, even if that state was not present during training. We meet the first requirement by investigating artificial neural networks (ANNs) with specific architectures, and attempt to meet the second by training them to generate motifs from random initial states. We find that learning of single motifs succeeds but that sequence generation is not robust: transition failures are observed. We then compare these results with a model whose architecture and analytically-tractable dynamics are inspired by the motor thalamocortical circuit, and that includes a specific module used to implement motif transitions. The synaptic weights of this model can be adjusted without requiring stochastic gradient descent (SGD) on the simulated network outputs, and we have asymptotic guarantees that transitions will not fail. Indeed, in simulations, we achieve single-motif accuracy on par with the previously studied ANNs and have improved sequencing robustness with no transition failures. Finally, we show that insights obtained by studying the transition subnetwork of this model can also improve the robustness of transitioning in the traditional ANNs previously studied.
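
As an illustration of the second requirement described in the abstract (robustly generating a motif from whatever state the network happens to be in), the sketch below trains a small recurrent network to reproduce a single motif starting from random initial hidden states. This is a minimal illustration only, assuming PyTorch, a sine-wave motif, and arbitrarily chosen sizes and hyperparameters; it is not the architecture or training procedure used in the paper.

```python
# Minimal sketch (illustrative, not the paper's implementation): train an RNN
# to produce one "motif" (a short target trajectory) when started from random
# initial hidden states. The random states stand in for the unknown states
# reached at the end of other motifs.

import torch
import torch.nn as nn

torch.manual_seed(0)

T, hidden_size, out_size, batch = 50, 64, 1, 32

# Illustrative motif target: one period of a sine wave, shape (T, out_size).
target = torch.sin(torch.linspace(0, 2 * torch.pi, T)).unsqueeze(-1)

rnn = nn.RNN(input_size=1, hidden_size=hidden_size, batch_first=True)
readout = nn.Linear(hidden_size, out_size)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

inputs = torch.zeros(batch, T, 1)  # no external drive: the motif is generated autonomously

for step in range(2000):
    # Fresh random initial states every step, since the state at the end of
    # any preceding motif is not available during training.
    h0 = torch.randn(1, batch, hidden_size)
    states, _ = rnn(inputs, h0)        # (batch, T, hidden_size)
    out = readout(states)              # (batch, T, out_size)
    loss = (out - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

As the abstract notes, training from random initial states of this kind is enough to learn single motifs, but it does not by itself guarantee robust sequencing: transition failures are still observed in the gradient-trained networks, which motivates the thalamocortical-inspired transition module studied in the paper.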

Citations (5)
