
Multi-modal Gaussian Process Variational Autoencoders for Neural and Behavioral Data (2310.03111v1)

Published 4 Oct 2023 in cs.LG and q-bio.NC

Abstract: Characterizing the relationship between neural population activity and behavioral data is a central goal of neuroscience. While latent variable models (LVMs) are successful in describing high-dimensional time-series data, they are typically designed for only a single type of data, making it difficult to identify structure shared across different experimental data modalities. Here, we address this shortcoming by proposing an unsupervised LVM that extracts temporally evolving shared and independent latents for distinct, simultaneously recorded experimental modalities. We do this by combining Gaussian Process Factor Analysis (GPFA), an interpretable LVM for neural spiking data with a temporally smooth latent space, with Gaussian Process Variational Autoencoders (GP-VAEs), which similarly use a GP prior to characterize correlations in a latent space, but admit rich expressivity due to a deep neural network mapping to observations. We achieve interpretability in our model by partitioning latent variability into components that are either shared between or independent of each modality. We parameterize the latents of our model in the Fourier domain, and show improved latent identification using this approach over standard GP-VAE methods. We validate our model on simulated multi-modal data consisting of Poisson spike counts and MNIST images that scale and rotate smoothly over time. We show that the multi-modal GP-VAE (MM-GPVAE) not only identifies the shared and independent latent structure across modalities accurately, but also provides good reconstructions of both images and neural rates on held-out trials. Finally, we demonstrate our framework in two real-world multi-modal experimental settings: Drosophila whole-brain calcium imaging alongside tracked limb positions, and Manduca sexta spike-train measurements from ten wing muscles as the animal tracks a visual stimulus.

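The generative structure the abstract describes can be illustrated with a minimal sketch: smooth latent trajectories are drawn from a GP prior parameterized in the Fourier domain, partitioned into shared and modality-specific components, then mapped to Poisson spike counts via a GPFA-style linear readout and to a second modality via a nonlinear decoder. This is not the authors' implementation; the function `sample_gp_fourier`, the toy `tanh` decoder, and all dimensions and lengthscales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                      # time bins per trial
times = np.arange(T)

def sample_gp_fourier(lengthscale, n_latents, n_freqs=32):
    """Sample smooth latents by parameterizing the GP in the Fourier domain:
    draw Fourier coefficients with standard deviation set by the RBF kernel's
    spectral density, then transform back to the time domain."""
    freqs = np.arange(n_freqs) / T
    # Spectral density of an RBF kernel decays with frequency (illustrative form).
    spectral_sd = np.exp(-((2 * np.pi * freqs * lengthscale) ** 2) / 4)
    coefs_cos = rng.normal(size=(n_latents, n_freqs)) * spectral_sd
    coefs_sin = rng.normal(size=(n_latents, n_freqs)) * spectral_sd
    basis_cos = np.cos(2 * np.pi * np.outer(freqs, times))
    basis_sin = np.sin(2 * np.pi * np.outer(freqs, times))
    latents = coefs_cos @ basis_cos + coefs_sin @ basis_sin
    return latents / latents.std(axis=1, keepdims=True)   # (n_latents, T)

# Partition latent variability: shared across modalities vs. independent.
z_shared = sample_gp_fourier(lengthscale=10.0, n_latents=2)
z_neural = sample_gp_fourier(lengthscale=5.0, n_latents=1)   # spiking-only
z_image  = sample_gp_fourier(lengthscale=5.0, n_latents=1)   # image-only

# Modality 1: GPFA-style linear map to log firing rates, Poisson spike counts.
n_neurons = 30
C = rng.normal(scale=0.5, size=(n_neurons, 3))
log_rates = C @ np.vstack([z_shared, z_neural]) + 1.0
spikes = rng.poisson(np.exp(log_rates))                  # (n_neurons, T)

# Modality 2: nonlinear decoder (a toy tanh stand-in for the deep network).
n_pixels = 64
W = rng.normal(scale=0.5, size=(n_pixels, 3))
images = np.tanh(W @ np.vstack([z_shared, z_image]))     # (n_pixels, T)
```

Inference in the actual MM-GPVAE would invert this generative process, recovering the shared and independent latents from the paired observations; the Fourier parameterization keeps the number of per-latent variational parameters small while enforcing temporal smoothness.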
Citations (5)