
Combining SNNs with Filtering for Efficient Neural Decoding in Implantable Brain-Machine Interfaces (2312.15889v2)

Published 26 Dec 2023 in cs.LG, cs.HC, cs.NE, and q-bio.NC

Abstract: While it is important to make implantable brain-machine interfaces (iBMI) wireless to increase patient comfort and safety, the trend of increased channel count in recent neural probes poses a challenge due to the concomitant increase in the data rate. Extracting information from raw data at the source by using edge computing is a promising solution to this problem, with integrated intention decoders providing the best compression ratio. Recent benchmarking efforts have shown recurrent neural networks to be the best solution. Spiking Neural Networks (SNN) emerge as a promising solution for resource-efficient neural decoding, while Long Short Term Memory (LSTM) networks achieve the best accuracy. In this work, we show that combining traditional signal processing techniques, namely signal filtering, with SNNs improves their decoding performance significantly for regression tasks, closing the gap with LSTMs at little added cost. Results with different filters are shown, with Bessel filters providing the best performance. Two block-bidirectional Bessel filters have been used--one for low latency and another for high accuracy. Adding the high-accuracy variant of the Bessel filters to the output of ANN, SNN and variants provided statistically significant benefits, with maximum gains of $\approx 5\%$ and $8\%$ in $R^2$ for two SNN topologies (SNN_Streaming and SNN_3D). Our work presents state-of-the-art results for this dataset and paves the way for decoder-integrated implants of the future.


Summary

  • The paper demonstrates that integrating filtering techniques, especially block bidirectional filtering, significantly improves neural decoding accuracy.
  • It compares traditional ANNs with spiking neural networks, finding that SNN_Streaming models offer a strong balance between performance and computational efficiency.
  • Results suggest that combining temporal data processing with filtering scales neural decoding to the high electrode counts of future implantable brain-machine interfaces.

ANN vs SNN: A Case Study for Neural Decoding in Implantable Brain-Machine Interfaces

Artificial Neural Networks vs Spiking Neural Networks

Implantable Brain-Machine Interfaces (iBMIs) are gaining traction as assistive technologies that let users control external devices with their neural activity. The challenge, as electrode counts increase, is to manage the growing data volume without compromising decoding performance. This paper compares the efficacy of various neural network models, evaluating both artificial neural networks (ANN) and spiking neural networks (SNN) for motor decoding tasks, focusing on accuracy and computational efficiency.

Methodology

Dataset and Preprocessing

The research uses a dataset from non-human primates performing a cursor control task. The recorded neural data from the microelectrode arrays (MEA) are processed using three main methods:

  • Summation Method: Summing neural spikes over a specified duration.
  • Sub-Window Method: Dividing the summation window into smaller sub-windows to retain more detailed temporal information.
  • Streaming Method: Directly feeding the raw spike data into the model.

These methods are implemented to balance the trade-off between computational load and temporal resolution of the neural data.
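The three preprocessing methods can be sketched as simple operations on a binary spike raster. The function names, window sizes, and channel counts below are illustrative choices for the sketch, not values taken from the paper:

```python
import numpy as np

def summation(spikes, win):
    """Summation method: sum spike counts over the last `win` time steps,
    producing one scalar feature per channel."""
    return spikes[-win:].sum(axis=0)

def sub_window(spikes, win, n_sub):
    """Sub-window method: split the window into `n_sub` equal sub-windows,
    summing within each, so coarse temporal structure is retained."""
    w = spikes[-win:].reshape(n_sub, win // n_sub, -1)
    return w.sum(axis=1)  # shape: (n_sub, channels)

def streaming(spikes):
    """Streaming method: pass the most recent raw spike vector through."""
    return spikes[-1]

# Toy binary spike raster: 100 time steps x 4 channels
rng = np.random.default_rng(0)
spikes = (rng.random((100, 4)) < 0.2).astype(int)

s = summation(spikes, win=50)              # shape (4,)
sw = sub_window(spikes, win=50, n_sub=5)   # shape (5, 4)
st = streaming(spikes)                     # shape (4,)
```

Note that summing the sub-window features over the sub-window axis recovers the summation features, which makes the trade-off explicit: the sub-window method keeps strictly more temporal information at the cost of a larger input.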

Neural Network Models

Five types of models were evaluated:

  1. ANN: A straightforward artificial neural network with two hidden layers.
  2. ANN_3D: An ANN whose input preprocessing operates on sub-windowed data, retaining finer temporal detail.
  3. LSTM: Long Short-Term Memory neural network capable of capturing temporal dependencies.
  4. SNN_3D: A spiking neural network that processes sub-windowed input data.
  5. SNN_Streaming: A spiking neural network designed for minimal computational overhead by processing input data as a continuous stream.
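The spiking models above are built from neurons that integrate input over time and emit binary spikes. A minimal sketch of one update step of a leaky integrate-and-fire (LIF) layer is shown below; the decay factor `beta` and threshold `v_th` are illustrative defaults, and the paper's exact neuron model and training setup are not reproduced here:

```python
import numpy as np

def lif_step(v, x, w, beta=0.9, v_th=1.0):
    """One time step of a leaky integrate-and-fire (LIF) layer:
    leak the membrane potential, add weighted input, spike on threshold,
    then reset the neurons that fired."""
    v = beta * v + x @ w               # leaky integration of synaptic current
    spk = (v >= v_th).astype(float)    # binary output spikes
    v = v * (1.0 - spk)                # reset membrane where a spike fired
    return v, spk

rng = np.random.default_rng(1)
w = rng.normal(scale=0.5, size=(4, 3))  # 4 input channels -> 3 neurons
v = np.zeros(3)
for t in range(20):
    x = (rng.random(4) < 0.3).astype(float)  # incoming spike vector
    v, spk = lif_step(v, x, w)
```

Because the layer's output is binary and state is carried in the membrane potential, an SNN_Streaming decoder can consume one spike vector per time step with very little arithmetic, which is the source of its efficiency advantage.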

Filtering for Improved Performance

Because raw model outputs are noisy, the researchers incorporated Bessel filters to smooth the predictions. Three main filtering techniques were explored: forward filtering, bidirectional filtering, and block bidirectional filtering, each aiming to reduce prediction noise without adding excessive computational burden.
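The three filtering variants can be sketched with SciPy's standard filter routines. The filter order, cutoff, and block length below are illustrative, not the paper's tuned values:

```python
import numpy as np
from scipy.signal import bessel, lfilter, filtfilt

# Low-pass Bessel filter; order and normalized cutoff are illustrative.
b, a = bessel(N=2, Wn=0.1)

def forward_filter(y):
    """Causal (forward-only) filtering: lowest latency, but adds phase lag."""
    return lfilter(b, a, y)

def bidirectional_filter(y):
    """Zero-phase forward-backward filtering: needs the full sequence."""
    return filtfilt(b, a, y)

def block_bidirectional_filter(y, block=64):
    """Forward-backward filtering applied per fixed-length block, trading
    a bounded latency (one block) for near zero-phase smoothing."""
    out = np.empty_like(y)
    for i in range(0, len(y), block):
        out[i:i + block] = filtfilt(b, a, y[i:i + block])
    return out

# Noisy decoder output: a slow sinusoid plus Gaussian noise
t = np.linspace(0, 4 * np.pi, 512)
y = np.sin(t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
smoothed = block_bidirectional_filter(y)
```

The block variant is what makes bidirectional smoothing usable online: instead of waiting for the whole trial, the decoder only waits for the current block to fill before emitting filtered predictions.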

Results

Performance vs. Computational Cost

Accuracy Gains with Filtering:

Adding Bessel filters, particularly the block bidirectional filter, substantially improved decoding accuracy across all models. The most notable gains were in the ANN_3D model, with up to a 0.05 increase in R².

Pareto Analysis:

  • Compute vs Accuracy: Models with block bidirectional filtering led the way, with LSTM models showing the highest accuracy but at a substantial computational cost. SNN_Streaming models offered a good compromise between accuracy and computational load.
  • Memory vs Accuracy: SNN models generally occupied efficient positions on the Pareto curve, offering greater memory economy than ANN models, especially when combined with traditional filtering.
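The compute-vs-accuracy comparison above amounts to finding the Pareto-optimal models: those for which no other model is at least as cheap and at least as accurate. A minimal sketch follows; the (cost, R²) numbers are placeholders for illustration, not measurements from the paper:

```python
def pareto_front(points):
    """Return the (cost, accuracy) points not dominated by any other point:
    a point is dominated if another point has cost <= its cost AND
    accuracy >= its accuracy (and is not the same point)."""
    front = []
    for cost, acc in points:
        dominated = any(c <= cost and a >= acc and (c, a) != (cost, acc)
                        for c, a in points)
        if not dominated:
            front.append((cost, acc))
    return sorted(front)

# Hypothetical (compute cost, R^2) pairs for the five decoder families.
models = {
    "LSTM":          (10.0, 0.74),
    "ANN":           ( 3.0, 0.66),
    "ANN_3D":        ( 5.0, 0.71),
    "SNN_3D":        ( 2.5, 0.70),
    "SNN_Streaming": ( 1.0, 0.64),
}
front = pareto_front(list(models.values()))
```

With these placeholder numbers, the plain ANN is dominated (SNN_3D is both cheaper and more accurate), while the remaining four models each occupy a distinct point on the frontier, mirroring the qualitative picture in the analysis.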

Insights and Implications

The results indicate that combining traditional signal filtering techniques with neural network models is highly effective for neural decoding tasks. The block bidirectional filter was particularly beneficial, providing significant accuracy improvements with minimal computational and memory overhead, making it suitable for real-time applications.

Practical Implications

  • Scalability: The integration of filtering techniques allows for better scalability in neural decoding, crucial for future iBMI systems with high electrode counts.
  • Efficiency: SNN_Streaming models, combined with appropriate filtering, offer an excellent balance between accuracy and resource efficiency, vital for battery-powered iBMI systems.

Future Directions

Continued research is essential to explore efficient data normalization techniques that can maintain sparse activations while improving accuracy for SNN models. Additionally, combining different network models or dynamically switching between models based on the task stage could yield further benefits. Quantization of models to reduce memory footprint and exploring hardware-based filtering techniques to reduce latency also present promising avenues for future work.

Conclusion

This paper presented a detailed comparison of ANN and SNN models for iBMI, highlighting the benefits of combining neural decoding with traditional filtering techniques. It provides valuable insights into the balance between accuracy, computational cost, and memory footprint, which are essential factors in developing efficient and scalable neural decoding systems for future implantable devices.
