Expressive architectures enhance interpretability of dynamics-based neural population models (2212.03771v4)
Abstract: Artificial neural networks that can recover latent dynamics from recorded neural activity may provide a powerful avenue for identifying and interpreting the dynamical motifs underlying biological computation. Given that neural variance alone does not uniquely determine a latent dynamical system, interpretable architectures should prioritize accurate and low-dimensional latent dynamics. In this work, we evaluated the performance of sequential autoencoders (SAEs) in recovering latent chaotic attractors from simulated neural datasets. We found that SAEs with widely-used recurrent neural network (RNN)-based dynamics were unable to infer accurate firing rates at the true latent state dimensionality, and that larger RNNs relied upon dynamical features not present in the data. On the other hand, SAEs with neural ordinary differential equation (NODE)-based dynamics inferred accurate rates at the true latent state dimensionality, while also recovering latent trajectories and fixed point structure. Ablations revealed that this is mainly because NODEs (1) allow use of higher-capacity multi-layer perceptrons (MLPs) to model the vector field and (2) predict the derivative rather than the next state. Decoupling the capacity of the dynamics model from its latent dimensionality enables NODEs to learn the requisite low-dimensional dynamics where RNN cells fail. Additionally, because the NODE predicts derivatives, it imposes a useful autoregressive prior on the latent states. The suboptimal interpretability of widely-used RNN-based dynamics may motivate replacing them with alternative architectures, such as NODEs, that enable learning of accurate dynamics in low-dimensional latent spaces.
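The two architectural properties the abstract credits for NODE interpretability can be illustrated concretely: the vector field is an MLP whose hidden width is independent of the latent dimensionality, and the network predicts a derivative that is integrated to obtain the next state. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the layer sizes, Euler integrator, and parameter initialization are illustrative assumptions (the paper's references suggest a PyTorch/TorchDyn implementation with an adaptive solver).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_vector_field(z, params):
    # Two-layer MLP f(z) approximating dz/dt. The hidden width is
    # decoupled from the latent dimensionality, so capacity can be
    # raised without inflating the latent state.
    W1, b1, W2, b2 = params
    h = np.tanh(z @ W1 + b1)
    return h @ W2 + b2

def node_step(z, params, dt=0.01):
    # A single explicit Euler step: the network outputs a derivative,
    # so the next state is the autoregressive update z + dt * f(z),
    # rather than a direct next-state prediction as in an RNN cell.
    return z + dt * mlp_vector_field(z, params)

latent_dim, hidden = 3, 128  # low-D latent space, high-capacity MLP
params = (
    rng.normal(scale=0.1, size=(latent_dim, hidden)),
    np.zeros(hidden),
    rng.normal(scale=0.1, size=(hidden, latent_dim)),
    np.zeros(latent_dim),
)

z = rng.normal(size=(16, latent_dim))  # batch of latent states
z_next = node_step(z, params)
print(z_next.shape)  # (16, 3): latent dimensionality is preserved
```

Note the contrast with an RNN cell, where the hidden state is both the latent representation and the carrier of model capacity, so matching the true latent dimensionality (here 3) would also cap the expressiveness of the learned dynamics.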