- The paper introduces vLGP, integrating a Gaussian process prior with point process models to accurately extract single-trial latent neural trajectories.
- It demonstrates improved efficiency and accuracy over methods like GPFA and PLDS using both simulated nonlinear dynamics and real V1 neural recordings.
- The method successfully uncovers complex latent structures that encode stimulus orientations, offering deeper insights into neural signal and noise variability.
Variational Latent Gaussian Process for Recovering Single-Trial Dynamics from Population Spike Trains
The paper proposes Variational Latent Gaussian Process (vLGP), an efficient inference method for recovering latent trajectories from neural population recordings. Its primary innovation is a generative model that pairs history-dependent point-process observations with a Gaussian process smoothness prior on the latent trajectories, improving on previous methods that either used inappropriate observation models or assumed linear latent dynamics.
Introduction and Motivations
Understanding the low-dimensional dynamics of neural populations is crucial for explaining the neural computations that drive behavior. Traditional methods rely on averaging spike train responses across repeated trials, presuming that neural firing is stereotyped and time-locked to the stimulus. This assumption neglects the real variability present in single trials, particularly for cognitive processes such as decision-making, where reaction times naturally introduce trial-to-trial variation.
Recent advances in recording large neural populations permit single-trial analysis, allowing recovery of latent dynamical trajectories trial by trial. Prior approaches have complementary limitations: GPFA (Gaussian Process Factor Analysis) pairs a smooth latent prior with a Gaussian observation model that fits spiking poorly at fine timescales, while PLDS (Poisson Linear Dynamical System) uses an appropriate point-process observation model but assumes linear latent dynamics. vLGP aims to improve on both by relaxing assumptions on the latent dynamics and adopting a Gaussian process prior for nonparametric inference, capturing the nonlinear dynamics of neural populations at a fine timescale.
Methodology and Implementation
Generative Model
The generative model for vLGP considers simultaneously recorded spike trains from N neurons, modeled as simple point processes with conditional intensity functions. The latent trajectory x_t is given a Gaussian process prior that encodes temporal smoothness, and each neuron's intensity depends on both the shared latent state and its own spiking history.
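In notation assumed here (loading vector a_n, history filter b_n, history features h_{n,t}, bin width Δ, and a squared-exponential kernel; the paper's exact parameterization may differ), the model can be sketched as:

```latex
% Each latent dimension l gets an independent GP prior; a squared-exponential
% kernel with variance \sigma_l^2 and timescale \tau_l is assumed here.
x_l \sim \mathcal{GP}(0, k_l), \qquad
k_l(t, t') = \sigma_l^2 \exp\!\left( -\frac{(t - t')^2}{2 \tau_l^2} \right)

% Neuron n's intensity combines the shared latent state with its own spiking
% history; counts in small bins are approximately Poisson.
\lambda_{n,t} = \exp\!\left( \mathbf{a}_n^\top \mathbf{x}_t + \mathbf{b}_n^\top \mathbf{h}_{n,t} \right),
\qquad
y_{n,t} \sim \operatorname{Poisson}\!\left( \lambda_{n,t} \, \Delta \right)
```

The exponential link keeps the intensity positive and, as shown below, yields closed-form expectations under a Gaussian variational posterior.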
Inference proceeds by optimizing an evidence lower bound (ELBO) via variational inference, with the posterior over the latent trajectory approximated by a Gaussian. This approximation makes the latent updates efficient while preserving enough flexibility to capture complex neural dynamics.
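Concretely, with q(x_t) = N(μ_t, Σ_t), the ELBO is the expected log-likelihood minus a KL penalty to the GP prior, and the exponential link gives the Poisson term in closed form via the log-normal moment. A sketch in the notation assumed above:

```latex
\mathcal{L}(q) = \mathbb{E}_q\!\left[ \log p(\mathbf{y} \mid \mathbf{x}) \right]
- \mathrm{KL}\!\left( q(\mathbf{x}) \,\middle\|\, p(\mathbf{x}) \right)

% Expected Poisson log-likelihood of one bin (bin width absorbed into the
% history/bias term; additive constants dropped):
\mathbb{E}_q\!\left[ \log p(y_{n,t} \mid \mathbf{x}_t) \right]
= y_{n,t} \left( \mathbf{a}_n^\top \boldsymbol{\mu}_t + \mathbf{b}_n^\top \mathbf{h}_{n,t} \right)
- \exp\!\left( \mathbf{a}_n^\top \boldsymbol{\mu}_t
  + \tfrac{1}{2} \mathbf{a}_n^\top \boldsymbol{\Sigma}_t \mathbf{a}_n
  + \mathbf{b}_n^\top \mathbf{h}_{n,t} \right)
```

Both terms are differentiable in μ_t, Σ_t, and the weights, which is what enables the coordinate-wise updates described next.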
Variational Inference Algorithm
The vLGP inference algorithm alternates between updating the latent processes, the model weights, and the hyperparameters. Computational techniques such as incomplete Cholesky decomposition of the large prior covariance matrices and coordinate-wise updates ensure scalability and numerical stability, making vLGP markedly faster than comparison methods like PLDS, especially for high-dimensional recordings binned at fine timescales.
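A minimal sketch (not the paper's code) of the pivoted incomplete Cholesky factorization that makes the T×T prior covariance tractable, approximating K ≈ GGᵀ with a low-rank factor so that inverses can be applied cheaply, e.g. via the Woodbury identity:

```python
import numpy as np

def se_kernel(t, sigma2=1.0, tau=100.0):
    """Squared-exponential kernel matrix on time points t (assumed form)."""
    d = t[:, None] - t[None, :]
    return sigma2 * np.exp(-0.5 * (d / tau) ** 2)

def incomplete_cholesky(K, rank, tol=1e-10):
    """Greedy pivoted incomplete Cholesky: returns G with K ~= G @ G.T."""
    T = K.shape[0]
    G = np.zeros((T, rank))
    d = np.diag(K).astype(float).copy()   # residual diagonal of K - G @ G.T
    for j in range(rank):
        i = int(np.argmax(d))             # pivot: largest residual variance
        if d[i] < tol:                    # remaining error is negligible
            return G[:, :j]
        G[:, j] = (K[:, i] - G @ G[i]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
    return G

t = np.arange(500.0)
K = se_kernel(t)
G = incomplete_cholesky(K, rank=30)
print(np.max(np.abs(K - G @ G.T)))        # small approximation error
```

Because the smooth prior makes K effectively low-rank, a small `rank` suffices, turning O(T³) solves into operations linear in T.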
Results
Simulation Studies
Simulations using both Lorenz nonlinear dynamics and a linear dynamical system (LDS) with a mismatched nonlinear observation model demonstrate vLGP's efficacy. The posterior means of the latent trajectories closely track the true latents, and vLGP outperforms PLDS and GPFA in both inference speed and accuracy, as quantified by rank correlation with the true trajectories and by predictive log-likelihood.
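For intuition, here is a toy simulation in the spirit of the Lorenz experiment (sizes, scales, and the omission of spike-history terms are my simplifications, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz(T, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Euler-integrate the Lorenz system; returns a standardized (T, 3) path."""
    x = np.empty((T, 3))
    x[0] = [1.0, 1.0, 1.0]
    for t in range(T - 1):
        dx = np.array([s * (x[t, 1] - x[t, 0]),
                       x[t, 0] * (r - x[t, 2]) - x[t, 1],
                       x[t, 0] * x[t, 1] - b * x[t, 2]])
        x[t + 1] = x[t] + dt * dx
    return (x - x.mean(0)) / x.std(0)

T, N = 1000, 50                           # time bins and neurons (toy sizes)
latents = lorenz(T)
A = 0.3 * rng.standard_normal((3, N))     # loading matrix, latent -> neurons
bias = np.log(0.02)                       # baseline of ~0.02 spikes per bin
spikes = rng.poisson(np.exp(latents @ A + bias))   # (T, N) spike counts
```

An inference method is then scored by how well its posterior mean recovers `latents` up to an invertible transform, which is one reason to report rank correlation rather than raw error.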
Real Data Application
The vLGP method was also applied to a high-dimensional dataset of V1 neural recordings. The inferred model reveals both noise-correlation structure and signal-driven latent dynamics that encode stimulus orientation on a toroidal topology. Cross-validation confirms that the 5-dimensional model captures both signal and noise variability, demonstrating vLGP's practical utility for neural data analysis.
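Such held-out comparisons of latent dimensionality reduce to a Poisson predictive log-likelihood on test spikes; a minimal sketch follows (the function name and the `predict`/`y_test` setup are illustrative, not the paper's code):

```python
import numpy as np

def poisson_pred_ll(y, log_rate):
    """Log-likelihood of held-out counts y under predicted log rates,
    dropping the log(y!) constant, which cancels across models."""
    return float(np.sum(y * log_rate - np.exp(log_rate)))

# Fit models of different latent dimensionality on training trials, predict
# log rates on held-out trials, and keep the dimension that scores best:
# best_dim = max(dims, key=lambda d: poisson_pred_ll(y_test, predict(d)))
```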
Conclusion
vLGP represents a significant advance in latent state-space modeling for neural data, offering improved scalability and accuracy without restrictive assumptions on the latent dynamics. Its application to varied datasets highlights its potential for uncovering computational structure encoded in neural populations, paving the way for broader investigations of the neural mechanisms underlying cognition and behavior. Future work could extend vLGP with other GP kernels, for example to capture specific neural oscillations, broadening its applicability across functional domains in neuroscience.