Extraction and Recovery of Spatio-Temporal Structure in Latent Dynamics Alignment with Diffusion Models (2306.06138v2)
Abstract: In behavior-related brain computation, raw neural signals must be aligned across recordings to overcome the drastic domain shift among them. A foundational framework in neuroscience posits that trial-based neural population activity is driven by low-dimensional latent dynamics, so focusing on these dynamics greatly simplifies the alignment procedure. Despite progress in this field, existing methods ignore the intrinsic spatio-temporal structure of the latent dynamics during alignment, which typically degrades both the recovered dynamics structure and overall performance. To tackle this problem, we propose ERDiff, an alignment method that leverages the expressivity of diffusion models to preserve the spatio-temporal structure of latent dynamics. Specifically, a diffusion model first extracts the latent dynamics structure of the source domain. Then, under the guidance of this diffusion model, that structure is recovered in the target domain through a maximum-likelihood alignment procedure. We first demonstrate the effectiveness of the proposed method on a synthetic dataset. When applied to neural recordings from the non-human primate motor cortex, under both cross-day and inter-subject settings, our method consistently preserves the spatio-temporal structure of the latent dynamics and outperforms existing approaches in alignment goodness-of-fit and neural decoding performance.
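To make the two-stage procedure in the abstract concrete, below is a minimal PyTorch sketch of the idea, under several simplifying assumptions: latent trajectories come from a plain linear read-in rather than the paper's probabilistic encoder, the diffusion model is a vanilla DDPM over flattened trajectories rather than the paper's spatio-temporal architecture, and the standard noise-prediction loss stands in for the maximum-likelihood alignment objective. All names (`Denoiser`, `source_readin`, `target_readin`) and hyperparameters are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage ERDiff idea (shapes and names assumed):
# (1) fit a denoising diffusion model to source-domain latent trajectories;
# (2) align the target domain by tuning its read-in so target latents score
# well under the frozen diffusion model (DDPM loss as a likelihood surrogate).
import torch
import torch.nn as nn

T_STEPS, LATENT_DIM, TIME_BINS, N_NEURONS = 100, 8, 25, 50

# Linear noise schedule and the cumulative alpha-bar terms of a DDPM.
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class Denoiser(nn.Module):
    """Predicts the noise added to a latent trajectory (trial, time, dim)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TIME_BINS * LATENT_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, TIME_BINS * LATENT_DIM),
        )
    def forward(self, z_noisy, t):
        x = torch.cat([z_noisy.flatten(1), t.float().unsqueeze(1) / T_STEPS], dim=1)
        return self.net(x).view(-1, TIME_BINS, LATENT_DIM)

def ddpm_loss(model, z0):
    """Denoising loss; doubles as the alignment objective in stage 2."""
    t = torch.randint(0, T_STEPS, (z0.shape[0],))
    eps = torch.randn_like(z0)
    ab = alphas_bar[t].view(-1, 1, 1)
    z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * eps  # forward diffusion at step t
    return ((model(z_t, t) - eps) ** 2).mean()

# Stage 1: extract source-domain latent-dynamics structure with the diffusion model.
source_spikes = torch.randn(512, TIME_BINS, N_NEURONS)  # stand-in for real data
source_readin = nn.Linear(N_NEURONS, LATENT_DIM)        # pretrained encoder assumed
with torch.no_grad():
    source_latents = source_readin(source_spikes)

denoiser = Denoiser()
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    ddpm_loss(denoiser, source_latents).backward()
    opt.step()

# Stage 2: maximum-likelihood alignment in the target domain. The denoiser is
# frozen; only the target read-in is optimized, so target latents are pulled
# toward trajectories the source diffusion model deems likely.
target_spikes = torch.randn(128, TIME_BINS, N_NEURONS)
target_readin = nn.Linear(N_NEURONS, LATENT_DIM)
target_readin.load_state_dict(source_readin.state_dict())  # warm start
denoiser.requires_grad_(False)
align_opt = torch.optim.Adam(target_readin.parameters(), lr=1e-4)
for _ in range(200):
    align_opt.zero_grad()
    ddpm_loss(denoiser, target_readin(target_spikes)).backward()
    align_opt.step()
```

The key design point this sketch tries to capture is that the diffusion model is trained once on the source domain and then held fixed: during alignment it acts as a prior over whole latent trajectories, so the target read-in is guided to preserve the source's spatio-temporal structure rather than matching marginal statistics alone.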