Back to the Continuous Attractor (2408.00109v3)
Abstract: Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states over indefinitely long time intervals. Unfortunately, continuous attractors generally suffer from severe structural instability: they are destroyed by most infinitesimal changes to the dynamical law that defines them. This fragility limits their utility, especially in biological systems, where recurrent dynamics are subject to constant perturbations. We observe that bifurcations from continuous attractors in theoretical neuroscience models take various structurally stable forms. Although their asymptotic memory-maintaining behaviors are categorically distinct, their finite-time behaviors are similar. We build on persistent manifold theory to explain the commonalities between bifurcations from, and approximations of, continuous attractors. A fast-slow decomposition analysis uncovers the persistent manifold that survives the seemingly destructive bifurcation. Moreover, recurrent neural networks trained on analog memory tasks display approximate continuous attractors with the predicted slow-manifold structure. Therefore, continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.
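To make the fast-slow picture concrete, the following minimal sketch (our own illustration, not the paper's model; the normal form, the sin(4θ) perturbation, and the parameter `eps` are assumptions) simulates a planar system whose unit circle is a continuous attractor when `eps = 0`. A small symmetry-breaking term destroys the circle of fixed points, yet the circle persists as an invariant slow manifold on which the stored angle drifts only at a rate of order `eps`, so finite-time memory degrades gracefully.

```python
import numpy as np

def evolve(theta0, eps, dt=1e-3, T=50.0):
    """Euler-integrate a toy perturbed ring attractor (illustrative only).

    Fast radial dynamics  dr/dt     = r * (1 - r**2)      contract onto r = 1.
    Slow angular dynamics dtheta/dt = eps * sin(4*theta)  break the symmetry.
    With eps = 0 the unit circle is a continuous attractor; for eps != 0 it
    persists as a slow manifold carrying four stable fixed points, so the
    stored angle drifts at a rate of order eps.
    """
    r = np.full_like(theta0, 0.5)          # start off the slow manifold
    theta = theta0.copy()
    for _ in range(int(T / dt)):
        r += dt * r * (1.0 - r**2)         # fast contraction to r = 1
        theta += dt * eps * np.sin(4.0 * theta)  # slow drift along the ring
    return theta

# Store 16 equally spaced angles, then read them out after T = 50 time units.
theta0 = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
for eps in (0.0, 1e-3):
    drift = np.abs(np.angle(np.exp(1j * (evolve(theta0, eps) - theta0))))
    print(f"eps = {eps:g}: max recall error after T=50 is {drift.max():.4f} rad")
```

With `eps = 0` every stored angle is recalled exactly; with `eps = 1e-3` the worst-case error after T = 50 stays near eps * T ≈ 0.05 rad, illustrating why a system that is asymptotically distinct from a continuous attractor can be functionally indistinguishable from one on finite horizons.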
- Population dynamics of head-direction neurons during drift and reorientation. Nature, 615(7954):892–899, 2023.
- Low-dimensional neural manifolds for the control of constrained and unconstrained movements. bioRxiv, pages 2023–05, 2023.
- From fixed points to chaos: Three models of delayed discrimination. Progress in neurobiology, 103:214–222, 2013.
- J. T. Barron. Continuously differentiable exponential linear units. arXiv preprint arXiv:1704.07483, 2017.
- Shaping dynamics with multiple populations in low-rank recurrent networks. Neural Computation, 33(6):1572–1615, 2021.
- Parametric control of flexible timing through low-dimensional neural manifolds. Neuron, 111(5):739–753, 2023.
- Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences, 92(9):3844–3848, 1995.
- T. Biswas and J. E. Fitzgerald. Geometric framework to predict structure from function in neural networks. Physical review research, 4(2):023255, 2022.
- Predictive coding of dynamical variables in balanced spiking networks. PLoS computational biology, 9(11):e1003258, Nov. 2013. ISSN 1553-734X, 1553-7358. doi: 10.1371/journal.pcbi.1003258.
- A continuous attractor network model without recurrent excitation: Maintenance and integration in the head direction cell system. Journal of computational neuroscience, 18(2):205–227, 2005.
- R. Chaudhuri and I. Fiete. Computational principles of memory. Nature neuroscience, 19(3):394, 2016.
- C. Chicone. Ordinary Differential Equations with Applications. Springer Science & Business Media, Sept. 2006. ISBN 9780387357942.
- Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
- Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cerebral cortex, 10(9):910–923, 2000.
- Emergence of functional and structural properties of the head direction system by optimization of recurrent neural networks. arXiv preprint, 2019.
- Recurrent neural network models for working memory of continuous variables: Activity manifolds, connectivity patterns, and dynamic codes. arXiv preprint arXiv:2111.01275, 2021.
- P. Dayan and L. F. Abbott. Theoretical neuroscience: Computational and mathematical modeling of neural systems. MIT Press, 2001.
- Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. bioRxiv, pages 2022–08, 2022.
- S. Druckmann and D. B. Chklovskii. Neuronal circuits underlying persistent representations despite time varying activity. Current Biology, 22(22):2095–2103, 2012.
- C. Ehresmann. Les connexions infinitésimales dans un espace fibré différentiable. In Colloque de topologie, Bruxelles, volume 29, pages 55–75, 1950.
- Flexible integration of continuous sensory evidence in perceptual estimation tasks. Proceedings of the National Academy of Sciences, 119(45):e2214441119, 2022.
- A. Fanthomme and R. Monasson. Low-dimensional manifolds support multiplexed integrations in recurrent neural networks. Neural Computation, 33(4):1063–1112, 2021.
- N. Fenichel and J. Moser. Persistence and smoothness of invariant manifolds for flows. Indiana University Mathematics Journal, 21(3):193–226, 1971.
- G. B. Folland. Real analysis: Modern techniques and their applications, volume 40. John Wiley & Sons, 1999.
- K. v. Frisch. The dance language and orientation of bees. Harvard University Press, 1993.
- S. Fusi and L. F. Abbott. Limits on the memory storage capacity of bounded synapses. Nature neuroscience, 10(4):485–493, Apr. 2007. ISSN 1097-6256, 1546-1726. doi: 10.1038/nn1859.
- E. Ghazizadeh and S. Ching. Slow manifolds within network dynamics encode working memory efficiently and robustly. PLoS computational biology, 17(9):e1009366, 2021.
- X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256, 2010.
- M. S. Goldman. Memory without feedback in a neural network. Neuron, 61(4):621–634, Feb. 2009. ISSN 0896-6273, 1097-4199. doi: 10.1016/j.neuron.2008.12.012.
- Robust persistent neural activity in a model integrator with multiple hysteretic dendrites per neuron. Cerebral cortex, 13(11):1185–1195, Nov. 2003. ISSN 1047-3211. doi: 10.1093/cercor/bhg095.
- M. Golubitsky and I. Stewart. The Symmetry Perspective: From Equilibrium to Chaos in Phase Space and Physical Space. Number 200 in Progress in Mathematics. Birkhäuser, 2002. ISBN 978-3-7643-6609-4.
- Modeling attractor deformation in the rodent head-direction system. Journal of neurophysiology, 83(6):3402–3410, 2000.
- J. Gu and S. Lim. Unsupervised learning for robust working memory. PLoS Computational Biology, 18(5):e1009083, 2022.
- V. Guillemin and A. Pollack. Differential topology, volume 370. American Mathematical Soc., 2010.
- On the impact of the activation function on deep neural networks training. In International conference on machine learning, pages 2672–2680. PMLR, 2019.
- M. W. Hirsch and B. Baird. Computing with dynamic attractors in neural networks. Biosystems, 34(1-3):173–195, 1995.
- Differential equations, dynamical systems, and an introduction to chaos. Academic Press, 2013.
- B. K. Hulse and V. Jayaraman. Mechanisms underlying the neural computation of head direction. Annual Review of Neuroscience, 43:31–54, 2020.
- How important are activation functions in regression and classification? a survey, performance comparison, and future directions. Journal of Machine Learning for Modeling and Computing, 4(1), 2023.
- C. K. R. T. Jones. Geometric singular perturbation theory. In L. Arnold, C. K. R. T. Jones, K. Mischaikow, G. Raugel, and R. Johnson, editors, Dynamical Systems: Lectures Given at the 2nd Session of the Centro Internazionale Matematico Estivo (C.I.M.E.) held in Montecatini Terme, Italy, June 13–22, 1994, pages 44–118. Springer Berlin Heidelberg, Berlin, Heidelberg, 1995. ISBN 9783540494157. doi: 10.1007/BFb0095239.
- Ring attractor dynamics emerge from a spiking model of the entire protocerebral bridge. Frontiers in behavioral neuroscience, 11:8, 2017.
- M. Khona and I. R. Fiete. Attractor and integrator networks in the brain. Nature reviews. Neuroscience, 23(12):744–766, Dec. 2022. ISSN 1471-003X, 1471-0048. doi: 10.1038/s41583-022-00642-0.
- Generation of stable heading representations in diverse visual scenes. Nature, 576(7785):126–131, 2019.
- J. J. Knierim and K. Zhang. Attractor dynamics of spatially correlated neural activity in the limbic system. Annual review of neuroscience, 35:267–285, 2012.
- Model for a robust neural integrator. Nature neuroscience, 5(8):775–782, Aug. 2002. ISSN 1097-6256. doi: 10.1038/nn893.
- S. Lim and M. S. Goldman. Noise tolerance of attractor and feedforward memory models. Neural computation, 24(2):332–390, Feb. 2012. ISSN 0899-7667, 1530-888X. doi: 10.1162/NECO_a_00234.
- S. Lim and M. S. Goldman. Balanced cortical microcircuitry for maintaining information in working memory. Nature neuroscience, 16(9):1306–1314, Sept. 2013. ISSN 1097-6256, 1546-1726. doi: 10.1038/nn.3492.
- R. Mañé. Persistent manifolds are normally hyperbolic. Transactions of the American Mathematical Society, 246:261–283, 1978.
- Universality and individuality in neural dynamics across large populations of recurrent networks. Advances in neural information processing systems, 32, 2019.
- R. Mañé. A proof of the C¹ stability conjecture. Publications Mathématiques de l’IHÉS, 66:161–210, 1987.
- Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474):78–84, 2013.
- F. Mastrogiuseppe and S. Ostojic. Linking connectivity, dynamics, and computations in low-rank recurrent neural networks. Neuron, 99(3):609–623, 2018.
- Diversity of emergent dynamics in competitive threshold-linear networks. SIAM Journal on Applied Dynamical Systems, 23(1):855–884, 2024.
- An approximate line attractor in the hypothalamus encodes an aggressive state. Cell, 186(1):178–193.e15, Jan. 2023. ISSN 0092-8674. doi: 10.1016/j.cell.2022.11.027.
- Accurate angular integration with only a handful of neurons. bioRxiv, 2022. doi: 10.1101/2022.05.23.493052.
- A diverse range of factors affect the nature of neural representations underlying short-term memory. Nature neuroscience, 22(2):275–283, 2019.
- Error-correcting dynamics in visual working memory. Nature communications, 10(1):3366, 2019.
- Persistent learning signals and working memory without continuous attractors. Aug. 2023.
- Automatic differentiation in PyTorch. In NIPS-W, 2017.
- Neural dynamics and architecture of the heading direction circuit in zebrafish. Nature neuroscience, 26(5):765–773, 2023.
- E. Pollock and M. Jazayeri. Engineering recurrent neural networks from task-relevant manifolds and dynamics. PLoS computational biology, 16(8):e1008128, 2020.
- R. Prohens and A. E. Teruel. Canard trajectories in 3D piecewise linear systems. Discrete Contin. Dyn. Syst., 33(3):4595–4611, 2013.
- Slow–fast n-dimensional piecewise linear differential systems. Journal of Differential Equations, 260(2):1865–1892, 2016.
- Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
- A coupled attractor model of the rodent head direction system. Network: Computation in Neural Systems, 7(4):671–685, 1996.
- Flexible sensorimotor computations through rapid reconfiguration of cortical dynamics. Neuron, 98(5):1005–1019, 2018.
- Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks. Neuron, 38(3):473–485, May 2003. ISSN 0896-6273. doi: 10.1016/s0896-6273(03)00255-1.
- Neuronal correlates of parametric working memory in the prefrontal cortex. Nature, 399(6735):470–473, June 1999. ISSN 0028-0836. doi: 10.1038/20939.
- A. Samsonovich and B. L. McNaughton. Path integration and cognitive mapping in a continuous attractor neural network model. Journal of Neuroscience, 17(15):5900–5920, 1997.
- Efficient low-dimensional approximation of continuous attractor networks. arXiv preprint arXiv:1711.08032, 2017.
- H. S. Seung. How the brain keeps the eyes still. Proceedings of the National Academy of Sciences, 93(23):13339–13344, 1996. ISSN 0027-8424, 1091-6490. doi: 10.1073/pnas.93.23.13339.
- H. S. Seung. Learning continuous attractors in recurrent networks. Advances in neural information processing systems, 10, 1997.
- Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron, 26(1):259–271, Apr. 2000. ISSN 0896-6273. doi: 10.1016/s0896-6273(00)81155-1.
- Computational roles of intrinsic synaptic dynamics. Current Opinion in Neurobiology, 70:34–42, 2021. ISSN 0959-4388. doi: 10.1016/j.conb.2021.06.002.
- D. J. Simpson. Dimension reduction for slow-fast, piecewise-smooth, continuous systems of ODEs. arXiv preprint arXiv:1801.04653, 2018.
- Chaos in random neural networks. Physical review letters, 61(3):259, 1988.
- D. Sussillo. Neural circuits as computational dynamical systems. Current opinion in neurobiology, 25:156–163, 2014.
- D. Sussillo and O. Barak. Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Neural computation, 25(3):626–649, 2013.
- J. S. Taube. The head direction signal: Origins and sensory-motor integration. Annu. Rev. Neurosci., 30:181–207, 2007.
- M. Tsodyks and T. Sejnowski. Associative memory and hippocampal place cells. International journal of neural systems, 6:81–86, 1995.
- Angular velocity integration in a fly heading circuit. eLife, 6:e23496, 2017.
- The neuroanatomical ultrastructure and function of a biological ring attractor. Neuron, 108(1):145–163, 2020.
- Learning accurate path integration in ring attractor models of the head direction system. eLife, 11:e69841, 2022.
- Computation through neural population dynamics. Annual review of neuroscience, 43(1):249–275, July 2020. ISSN 0147-006X. doi: 10.1146/annurev-neuro-092619-094115.
- S. Wiggins. Normally hyperbolic invariant manifolds in dynamical systems, volume 105. Springer Science & Business Media, 1994.
- Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nature neuroscience, 17(3):431–439, 2014.
- A brainstem integrator for self-location memory and positional homeostasis in zebrafish. Cell, 185(26):5011–5027, 2022.
- Task representations in neural networks trained to perform many cognitive tasks. Nature Neuroscience, 22(2):297–306, 2019. doi: 10.1038/s41593-018-0310-2.
- K. Zhang. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. Journal of Neuroscience, 16(6):2112–2126, 1996.