eXponential FAmily Dynamical Systems (XFADS): Large-scale nonlinear Gaussian state-space modeling (2403.01371v4)

Published 3 Mar 2024 in stat.ML and cs.LG

Abstract: State-space graphical models and the variational autoencoder framework provide a principled apparatus for learning dynamical systems from data. State-of-the-art probabilistic approaches are often able to scale to large problems at the cost of flexibility of the variational posterior or expressivity of the dynamics model. However, those consolidations can be detrimental if the ultimate goal is to learn a generative model capable of explaining the spatiotemporal structure of the data and making accurate forecasts. We introduce a low-rank structured variational autoencoding framework for nonlinear Gaussian state-space graphical models capable of capturing dense covariance structures that are important for learning dynamical systems with predictive capabilities. Our inference algorithm exploits the covariance structures that arise naturally from sample-based approximate Gaussian message passing and low-rank amortized posterior updates -- effectively performing approximate variational smoothing with time complexity scaling linearly in the state dimensionality. In comparisons with other deep state-space model architectures, our approach consistently demonstrates the ability to learn a more predictive generative model. Furthermore, when applied to neural physiological recordings, our approach is able to learn a dynamical system capable of forecasting population spiking and behavioral correlates from a small portion of single trials.
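
To make the setup in the abstract concrete, here is a minimal NumPy sketch of a nonlinear Gaussian state-space model together with a low-rank-plus-diagonal Gaussian posterior covariance of the kind the abstract alludes to. The dynamics function, noise scales, Gaussian read-out, and rank are arbitrary illustrative assumptions, not the paper's actual architecture or inference algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (arbitrary illustrative choices).
L, N, T = 10, 50, 100   # latent dim, observation dim, time steps
R = 3                   # rank of the low-rank covariance update

# Stand-in nonlinear dynamics f(z); the paper learns this function from data.
Q = np.linalg.qr(rng.standard_normal((L, L)))[0]
def dynamics(z):
    return np.tanh(z @ (0.9 * Q).T)

# Linear-Gaussian read-out; the framework targets exponential-family
# likelihoods (e.g. Poisson for spike counts), Gaussian is simplest here.
C = rng.standard_normal((N, L)) / np.sqrt(L)

# Simulate the generative model: z_t = f(z_{t-1}) + w_t,  x_t = C z_t + v_t.
z = np.zeros((T, L))
x = np.zeros((T, N))
for t in range(1, T):
    z[t] = dynamics(z[t - 1]) + 0.1 * rng.standard_normal(L)
    x[t] = C @ z[t] + 0.5 * rng.standard_normal(N)

# Low-rank-plus-diagonal posterior covariance, S = diag(d) + U U^T.
# Storing (d, U) instead of a dense L x L matrix is what lets a smoothing
# pass scale linearly in the state dimensionality L for fixed rank R.
d = np.exp(rng.standard_normal(L))   # positive diagonal entries
U = rng.standard_normal((L, R))
m = np.zeros(L)                      # posterior mean (placeholder)

# Sampling from N(m, diag(d) + U U^T) without a dense Cholesky factor:
# s = m + sqrt(d) * eps1 + U @ eps2, with independent standard-normal eps1, eps2.
eps1, eps2 = rng.standard_normal(L), rng.standard_normal(R)
sample = m + np.sqrt(d) * eps1 + U @ eps2   # O(L R) work per sample

print(x.shape, sample.shape)   # (100, 50) (10,)
```

The sampling trick at the end illustrates why the low-rank parameterization is attractive: the sample's covariance is exactly diag(d) + U U^T, yet no dense L x L matrix is ever formed.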
