Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics (2206.02972v2)

Published 7 Jun 2022 in stat.ML, cs.LG, eess.SP, q-bio.NC, and stat.AP

Abstract: Learning interpretable representations of neural dynamics at a population level is a crucial first step toward understanding how observed neural activity relates to perception and behavior. Models of neural dynamics often focus either on low-dimensional projections of neural activity or on learning dynamical systems that explicitly relate to the neural state over time. We discuss how these two approaches are interrelated by considering dynamical systems as representative of flows on a low-dimensional manifold. Building on this concept, we propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time-series data as a sparse combination of simpler, more interpretable components. Our model is trained through a dictionary learning procedure, in which we leverage recent results on tracking sparse vectors over time. The decomposed nature of the dynamics is more expressive than previous switched approaches for a given number of parameters and enables modeling of overlapping and non-stationary dynamics. In both continuous-time and discrete-time instructional examples, focusing on intuitive low-dimensional non-stationary linear and nonlinear systems, we demonstrate that our model approximates the original system well, learns efficient representations, and captures smooth transitions between dynamical modes. Furthermore, we highlight our model's ability to efficiently capture and demix population dynamics generated by multiple independent subnetworks, a task that is computationally impractical for switched models. Finally, we apply our model to whole-brain neural recordings of C. elegans, illustrating a diversity of dynamics that is obscured when the activity is classified into discrete states.
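
The model class described in the abstract lends itself to a short simulation. The sketch below is a minimal illustration, not the authors' implementation: it uses the discrete-time form suggested by the abstract, in which the transition operator at each step is a time-varying combination sum_j c_{j,t} f_j of simple linear components f_j. The dictionary (two planar rotations), the state dimension, and the hand-set coefficient schedule are all illustrative assumptions; in dLDS both the dictionary and the sparse coefficients are learned from data.

```python
# Minimal sketch of a decomposed linear dynamical system: the state update is a
# time-varying mixture of a few fixed linear operators. All specific values
# below (dictionary, dimensions, coefficient schedule) are illustrative
# assumptions, not the paper's learned parameters.
import numpy as np

rng = np.random.default_rng(0)

def rotation(theta):
    """A 2-D rotation matrix, used here as a simple linear component f_j."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

F = [rotation(0.05), rotation(-0.12)]   # dictionary of components {f_j}
T, d = 200, 2                           # time steps, state dimension
x = np.zeros((T, d))
x[0] = rng.standard_normal(d)           # random initial state

for t in range(T - 1):
    # Coefficients c_{j,t}: smoothly hand one component over to the other,
    # mimicking a non-stationary mixture rather than a hard switch. In the
    # paper these sparse coefficients are inferred, not scheduled by hand.
    w = t / (T - 1)
    c = np.array([1.0 - w, w])
    A_t = sum(c_j * f_j for c_j, f_j in zip(c, F))  # effective dynamics at t
    x[t + 1] = A_t @ x[t]
```

A switched model would restrict c_t to a one-hot vector, selecting exactly one component per time step; allowing fractional, overlapping coefficients is what lets the decomposed model represent smooth transitions between modes and concurrent subnetwork dynamics with the same small dictionary.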
