
Back to the Continuous Attractor (2408.00109v3)

Published 31 Jul 2024 in q-bio.NC, cs.NE, and nlin.AO

Abstract: Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general--they are destroyed by most infinitesimal changes of the dynamical law that defines them. This fragility limits their utility especially in biological systems as their recurrent dynamics are subject to constant perturbations. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. Although their asymptotic behaviors to maintain memory are categorically distinct, their finite-time behaviors are similar. We build on the persistent manifold theory to explain the commonalities between bifurcations from and approximations of continuous attractors. Fast-slow decomposition analysis uncovers the persistent manifold that survives the seemingly destructive bifurcation. Moreover, recurrent neural networks trained on analog memory tasks display approximate continuous attractors with predicted slow manifold structures. Therefore, continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.


Summary

  • The paper develops a theory based on persistent invariant manifolds to explain how systems near a continuous attractor continue to support analog memory when perturbed.
  • It employs empirical bifurcation analyses and phase portraits to illustrate slow manifold formation in various neural models.
  • Numerical experiments with task-optimized RNNs demonstrate that attractive slow invariant manifolds support robust analog memory in these networks.

Overview of "Back to the Continuous Attractor"

The paper "Back to the Continuous Attractor" provides a detailed analysis of continuous attractors, their theoretical underpinnings, instabilities, and potential applications within the realms of both theoretical and practical neuroscience. Continuous attractors are fundamental constructs used to model the storage of continuous-valued information in neural systems through recurrent dynamics. Such mechanisms are critical for various biological functions, including the maintenance of continuous variables such as eye position, head direction, and sensory evidence.

The Problem with Continuous Attractors

Continuous attractors, despite their theoretical appeal, suffer from severe structural instability that limits their practical utility in biological contexts. Almost any small perturbation of the dynamical law governing such an attractor destroys the continuum of fixed points, posing a significant challenge for their reliable implementation in neural systems.

In biological systems, recurrent dynamics are subject to constant perturbations from factors such as synaptic plasticity and spontaneous fluctuations in synaptic weights. Without additional stabilizing mechanisms, these perturbations break the continuity of fixed points needed to hold continuous-valued memories. This inherent brittleness, often referred to as the "fine-tuning problem," calls into question the plausibility of continuous attractors as biological substrates for memory maintenance over behaviorally relevant time scales.
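To make the fragility concrete, here is a minimal toy sketch (our own construction, not taken from the paper): a two-neuron linear network whose weight matrix has a zero eigenvalue implements a line attractor, and an arbitrarily small mistuning of the weights removes the continuum of fixed points, so the stored value slowly drifts.

```python
# Toy illustration (not from the paper) of the fine-tuning problem:
# dx/dt = W x with a zero eigenvalue is a line attractor; a tiny weight
# perturbation removes the continuum of fixed points and the memory drifts.
import numpy as np

def simulate(W, x0, dt=0.01, T=500.0):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (W @ x)           # forward-Euler integration of dx/dt = W x
    return x

W_ideal = np.array([[-1.0,  1.0],
                    [ 1.0, -1.0]])     # eigenvalues 0 and -2: every point on span{[1,1]} is a fixed point
eps = 2e-3
W_pert = W_ideal + eps * np.eye(2)     # mistune the weights by a fraction of a percent

x0 = [1.0, 1.0]                        # a state on the attractor (the stored value)
print("ideal:    ", simulate(W_ideal, x0))   # stays at [1, 1]: the memory is held indefinitely
print("perturbed:", simulate(W_pert, x0))    # drifts exponentially at rate eps: the memory degrades over long times
```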

Contributions

Persistent Manifolds and Theoretical Explanation

One of the central contributions of the paper is a theory based on persistent invariant manifolds that explains the behavior of continuous attractors under perturbation and the quality of their approximations. Using a fast-slow decomposition, the authors uncover the slow manifolds that survive the seemingly destructive bifurcations. The results indicate that while the finite-time behaviors of the perturbed systems closely resemble those of a perfect continuous attractor, their asymptotic behaviors can be categorically distinct.

This framework builds on Fenichel's persistence theorem and shows that approximate continuous attractors with the predicted slow manifold structure can arise, in particular in trained recurrent neural networks (RNNs). The persistent invariant manifold, or "slow manifold," remains functionally robust, so continuous attractors, although never realized exactly, still serve as a useful idealization for understanding analog memory.
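A schematic statement of the persistence argument, in our own notation rather than the paper's, treats the perturbation as a small term added to a vector field whose zero set is the continuous attractor:

```latex
% Schematic fast--slow picture (our notation, not the paper's).
\begin{align*}
  \dot{x} &= f(x) + \epsilon\, g(x), \qquad 0 < \epsilon \ll 1, \\
  M_0 &= \{\, x \;:\; f(x) = 0 \,\} \quad \text{(continuous attractor of } \dot{x} = f(x)\text{)}.
\end{align*}
```

Under the assumption that $M_0$ is compact and normally hyperbolic (attracting in the directions transverse to itself), Fenichel-type persistence guarantees, for sufficiently small $\epsilon$, an invariant manifold $M_\epsilon$ that is $O(\epsilon)$-close to $M_0$ and on which the flow has speed $O(\epsilon)$; the continuum of fixed points is gone, but the memory degrades only on $O(1/\epsilon)$ time scales.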

Empirical Analysis through Bifurcation Studies

The paper also provides a comprehensive empirical analysis of bifurcations from continuous attractors in a range of theoretical neuroscience models. Detailed bifurcation diagrams and phase portraits show the different forms these systems take under small parametric perturbations. For example, in bounded line attractor and ring attractor models, the authors demonstrate the formation of slow manifolds after the bifurcation, emphasizing that these manifolds retain an approximation of the original continuous attractor.
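As a hedged toy illustration of this picture (our own construction, not a model analyzed in the paper), consider a planar ring attractor written in polar coordinates and perturbed by a small tangential drift: the continuum of fixed points on the ring is destroyed, yet the ring survives as an attracting slow manifold.

```python
# Toy ring attractor (ours, not from the paper): r' = r(1 - r^2), theta' = 0
# has a continuum of fixed points on r = 1. A small tangential perturbation
# leaves only a few isolated fixed points, but the ring persists as an
# attracting slow manifold along which the state drifts slowly.
import numpy as np

eps = 0.01            # perturbation strength
dt, T = 0.01, 200.0

def step(r, th):
    dr  = r * (1.0 - r**2)          # fast contraction onto the ring r = 1
    dth = eps * np.sin(2.0 * th)    # slow drift along the ring (the perturbation)
    return r + dt * dr, th + dt * dth

r, th = 0.2, 1.0                     # start off the ring
for _ in range(int(T / dt)):
    r, th = step(r, th)

print(f"final radius {r:.4f}  (pulled back onto the ring)")
print(f"final angle  {th:.4f}  (drifted slowly toward the surviving fixed point near pi/2)")
```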

Numerical Experiments with RNNs

To substantiate their theoretical claims, the authors conducted numerical experiments using task-optimized RNNs. They trained RNNs on analog memory tasks and analyzed the resulting dynamics to identify approximate continuous attractors. These task-trained RNNs displayed invariant manifolds with topologies corresponding to the task-specific memory requirements, such as rings for circular variables.

The analysis revealed substantial variation in the topology of the networks' solutions, yet every solution featured an attractive slow invariant manifold, linking the theoretical predictions to practical implementations of analog memory in neural networks. Furthermore, the paper uses the uniform norm of the vector field on these slow manifolds as a measure of a system's proximity to a continuous attractor and as a predictor of how well its memory generalizes to longer time horizons.
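A sketch of the kind of diagnostic this suggests (an assumed workflow with stand-in weights, not the authors' code): given the dynamics of a trained continuous-time RNN, sample states along the putative memory manifold and measure the maximum speed of the flow there; a small uniform norm indicates an approximate continuous attractor whose stored value drifts only slowly.

```python
# Assumed diagnostic sketch (not the paper's code): for a continuous-time RNN
# tau * dh/dt = -h + W @ tanh(h) + b, estimate proximity to a continuous
# attractor by the speed ||dh/dt|| at states sampled on the memory manifold.
import numpy as np

rng = np.random.default_rng(0)
N = 64
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # stand-in weights; in practice, use the trained RNN
b = np.zeros(N)
tau = 1.0

def velocity(h):
    """Right-hand side of the RNN dynamics at state h."""
    return (-h + W @ np.tanh(h) + b) / tau

# Stand-in for states collected along the memory manifold (e.g., end-of-delay states, one per stored value).
manifold_states = rng.normal(size=(100, N))

speeds = np.linalg.norm([velocity(h) for h in manifold_states], axis=1)
print(f"max speed on manifold (uniform norm): {speeds.max():.3e}")
print(f"mean speed on manifold:               {speeds.mean():.3e}")
# A small uniform norm means the flow on the manifold is slow, i.e. the network
# implements an approximate continuous attractor whose memory drifts only slowly.
```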

Implications and Future Directions

The findings of this paper have far-reaching implications for both theoretical neuroscience and the practical implementation of neural memory systems. By establishing a clearer understanding of the robust characteristics of continuous attractors, even under perturbation, the authors provide a foundation for developing more resilient neural models.

On a theoretical level, the persistent manifold theory contributes to the broader understanding of dynamical systems and their application to neural computation. Practically, the insights gained from this research could inform the design of more robust artificial neural networks capable of maintaining analog memory through slow manifolds, even in the presence of noise and perturbations.

Future work could extend these insights to more complex neural architectures and explore potential applications in artificial intelligence where continuous memory representations are essential. Additionally, investigating the interplay between noise tolerance and memory performance in biological systems can offer deeper insights into how these mechanisms evolve and maintain functionality in the brain.

The paper "Back to the Continuous Attractor" thus provides a comprehensive examination of continuous attractors, offering both a theoretical framework and empirical evidence to understand their robustness and practical utility. The paper's insights are crucial for advancing the understanding of memory mechanisms in neural systems and have significant implications for future developments in both neuroscience and artificial intelligence.