Neural population dynamics in songbird RA and HVC during learned motor-vocal behavior

Published 8 Jul 2024 in q-bio.NC | (2407.06244v1)

Abstract: Complex, learned motor behaviors involve the coordination of large-scale neural activity across multiple brain regions, but our understanding of the population-level dynamics within different regions tied to the same behavior remains limited. Here, we investigate the neural population dynamics underlying learned vocal production in awake-singing songbirds. We use Neuropixels probes to record the simultaneous extracellular activity of populations of neurons in two regions of the vocal motor pathway. In line with observations made in non-human primates during limb-based motor tasks, we show that the population-level activity in both the premotor nucleus HVC and the motor nucleus RA is organized on low-dimensional neural manifolds upon which coordinated neural activity is well described by temporally structured trajectories during singing behavior. Both the HVC and RA latent trajectories provide relevant information to predict vocal sequence transitions between song syllables. However, the dynamics of these latent trajectories differ between regions. Our state-space models suggest a unique and continuous-over-time correspondence between the latent space of RA and vocal output, whereas the corresponding relationship for HVC exhibits a higher degree of neural variability. We then demonstrate that comparable high-fidelity reconstruction of continuous vocal outputs can be achieved from HVC and RA neural latents and spiking activity. Unlike those that use spiking activity, however, decoding models using neural latents generalize to novel sub-populations in each region, consistent with the existence of preserved manifolds that confine vocal-motor activity in HVC and RA.


Summary

  • The paper demonstrates that neural activity in RA and HVC can be represented as low-dimensional manifolds that capture structured trajectories and predict syllable transitions.
  • It employs high-density Neuropixels recordings and Gaussian-process factor analysis (GPFA) to robustly extract dynamic neural patterns during birdsong, with recording sites validated by post-mortem histology.
  • The study develops a real-time brain-to-song decoder that translates neural dynamics into acoustic output, suggesting advancements for neural prosthetics and brain-machine interfaces.

Neural Population Dynamics in Songbird RA and HVC

The paper investigates the neural population dynamics involved in the learned motor-vocal behavior of songbirds, focusing on two interconnected regions of the vocal motor pathway: the premotor nucleus HVC and the motor nucleus RA. Using Neuropixels probes, the study simultaneously records neural activity from both regions in zebra finches to elucidate how these circuits support vocal production.

Methodology

High-density Neuropixels silicon probes were implanted in freely singing zebra finches to record the activity of HVC and RA neural populations during vocal behavior. The aim was to capture population dynamics that reveal how these two regions coordinate to produce song. Recording sites were validated by post-mortem histological assessments (Figure 1).

Figure 1: Multi-Region Neuropixels Recordings in Awake-Singing Songbirds, illustrating the experimental setup.

The study uses Gaussian-process factor analysis (GPFA) for dimensionality reduction, projecting high-dimensional neural data onto low-dimensional manifolds. This manifold representation captures the neural population's covariance structure and its relation to song production, with the latent trajectories describing moment-to-moment population dynamics (Figure 2).

Figure 2: The Neural Manifold Hypothesis: low-dimensional manifolds capture the coordinated population activity required for complex behaviors such as birdsong production.
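
As a rough illustration of this analysis step, the sketch below bins spike trains, applies factor analysis, and then smooths the latent trajectories over time. It is a simplified stand-in for GPFA (which couples dimensionality reduction and temporal smoothing through Gaussian-process priors over time), and the array shapes and parameter values are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import FactorAnalysis

def extract_latent_trajectory(spike_counts, n_latents=8, smooth_sigma=2.0):
    """spike_counts: (n_time_bins, n_neurons) binned spikes for one song rendition."""
    fa = FactorAnalysis(n_components=n_latents)
    # Square-root transform roughly stabilizes Poisson-like count variance.
    latents = fa.fit_transform(np.sqrt(spike_counts))
    # Smooth each latent dimension over time; GPFA does this implicitly via GP kernels.
    return gaussian_filter1d(latents, sigma=smooth_sigma, axis=0)

# Illustrative data: 500 time bins (e.g., 10 ms bins across a motif), 60 recorded units.
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(500, 60)).astype(float)
trajectory = extract_latent_trajectory(counts)  # shape (500, 8)
```

In practice, the latent dimensionality and bin size would be chosen by cross-validation across song renditions rather than fixed in advance.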

Results

Latent Neural Trajectories

Analysis of neural population activity revealed that both HVC and RA activity can be represented on low-dimensional manifolds that delineate structured trajectories. RA trajectories exhibited less variability and tracked song motifs more smoothly than HVC trajectories, which showed higher variance while still maintaining temporal structure (Figure 3).

Figure 3: State-Space Analysis, contrasting the smoother RA trajectories with the more variable HVC activity during song production.

Latent manifolds in RA showed significantly lower dispersion, indicating more consistent dynamics in line with the precision of the behavioral output (Figure 4). In contrast, the higher dispersion of HVC trajectories was linked to that region's role in integrating sensory feedback and motor planning, supporting the hypothesis of greater intrinsic variability in HVC.

Figure 4: Dimensionality-Dependent Analysis reveals differences in latent dispersion between RA and HVC across dimensionalities.
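
One way to quantify this region difference, sketched below under assumed data shapes, is to measure each rendition's deviation from the across-rendition mean latent trajectory. The variable names and simulated inputs are illustrative and do not reproduce the paper's dispersion metric.

```python
import numpy as np

def trajectory_dispersion(latents):
    """Mean Euclidean distance from the across-rendition mean trajectory, per time bin.

    latents: array of shape (n_renditions, n_time_bins, n_latents), time-aligned
    (e.g., warped to a common motif template).
    """
    mean_traj = latents.mean(axis=0, keepdims=True)            # (1, T, D)
    deviations = np.linalg.norm(latents - mean_traj, axis=-1)  # (n_renditions, T)
    return deviations.mean(axis=0)                             # (T,)

# Simulated stand-ins: RA trajectories drawn tighter than HVC trajectories.
rng = np.random.default_rng(1)
latents_ra = rng.normal(scale=0.5, size=(40, 500, 8))
latents_hvc = rng.normal(scale=1.0, size=(40, 500, 8))
print(trajectory_dispersion(latents_ra).mean(), trajectory_dispersion(latents_hvc).mean())
```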

Syllable Distance and Transition Prediction

Further analysis of the manifold structure across neural states revealed distinguishable patterns of syllable-specific encoding. Latent states in both regions could predict transitions between syllables through distinct state changes, with RA providing stronger evidence of such predictability (Figure 5, Figure 6).

Figure 5: Latent manifold states vary with syllable distance, showing greater separability of RA states.


Figure 6: HVC and RA Latent States Predict Vocal Transitions, showcasing the capacity for syllable-transition prediction from neural states.
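
A minimal version of such a transition decoder, assuming pre-transition latent states have already been extracted and labeled with the upcoming syllable, could look like the following. The feature construction, classifier choice, and placeholder data are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder features/labels: X holds the flattened latent state in a window
# preceding each transition; y holds the identity of the syllable that follows.
rng = np.random.default_rng(2)
n_transitions, n_features = 300, 40
X = rng.normal(size=(n_transitions, n_features))
y = rng.integers(0, 3, size=n_transitions)  # which of 3 candidate syllables follows

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated transition prediction accuracy: {accuracy:.2f}")
```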

Brain-to-Song Decoder

A major application of the study was the development of a real-time brain-to-song decoder that translates neural activity into continuous acoustic output using a neural network architecture (Figure 7). The decoder performed comparably whether driven by latent trajectories or raw spike trains, and, unlike the spike-based decoders, the latent-based decoders generalized to novel sub-populations of neurons, indicating that the latent manifolds capture the neural information required for vocal synthesis.

Figure 7: Brain-to-Song Decoder architecture showing the conversion of neural dynamics into song representations.
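
To make the decoding setup concrete, here is a minimal sketch of a latent-to-acoustics regressor: a feed-forward network mapping a short window of latent states to an acoustic feature frame (e.g., a mel-spectrogram column). The architecture, window length, and placeholder data are assumptions for illustration and do not reproduce the paper's decoder.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_frames, n_latents, window, n_mel = 2000, 8, 10, 64
latents = rng.normal(size=(n_frames, n_latents))   # latent trajectory (placeholder)
spectrogram = rng.normal(size=(n_frames, n_mel))   # acoustic targets (placeholder)

# Stack a short history of latent states as the input for each acoustic frame.
X = np.stack([latents[t - window:t].ravel() for t in range(window, n_frames)])
Y = spectrogram[window:]

decoder = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=200)
decoder.fit(X, Y)
predicted_frames = decoder.predict(X[:5])          # reconstructed spectrogram frames
```

Swapping the latent inputs for binned spike counts gives the spike-based variant; the paper's generalization result corresponds to evaluating a latent-based decoder on trajectories estimated from held-out sub-populations of neurons.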

Implications and Future Directions

This investigation highlights the significance of low-dimensional population dynamics in complex motor behaviors like birdsong. The findings suggest potential extensions to brain-machine interface applications, particularly decoding and synthesizing speech in humans. Future research could refine these models, explore higher-dimensional neural dynamics, and test a wider range of vocal behaviors to improve prosthetic designs and deepen understanding of other vocal learners, including humans.

Conclusion

The study elucidates the neural population dynamics underlying song production in birds, providing insight into the roles of HVC and RA in learning and executing complex vocalizations. By employing high-resolution recording techniques and dimensionality-reduction methods, this work contributes to the broader understanding of neural population dynamics in vocal learning and lays groundwork for advanced neural prosthetic developments.
