
EEG motor imagery decoding: A framework for comparative analysis with channel attention mechanisms (2310.11198v2)

Published 17 Oct 2023 in cs.HC, cs.AI, and cs.LG

Abstract: This study investigates the application of channel attention mechanisms to motor imagery decoding for brain-computer interfaces (BCIs). Channel attention mechanisms can be viewed as a powerful evolution of the spatial filters traditionally used for motor imagery decoding. We systematically compare such mechanisms by integrating them into a deliberately simple, lightweight baseline architecture designed for seamless integration of different channel attention modules. This contrasts with previous work, which typically investigates a single attention mechanism embedded in a complex, sometimes nested architecture. Our framework instead evaluates and compares the impact of different attention mechanisms under identical conditions. The easy integration of attention mechanisms and the low computational complexity enable a wide range of experiments on four datasets, thoroughly assessing the effectiveness of both the baseline model and the attention mechanisms. The experiments demonstrate the strength and generalizability of the framework and show how channel attention can improve performance while preserving the baseline's small memory footprint and low computational cost. By emphasizing simplicity, easy integration of channel attention, and a high degree of generalizability across datasets, the architecture offers a versatile and efficient solution for EEG motor imagery decoding within brain-computer interfaces.
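The abstract's core idea, channel attention as a learned recalibration of EEG channels, can be illustrated with a minimal squeeze-and-excitation (SE) style sketch. This is a hedged illustration, not the paper's implementation: the paper compares several attention mechanisms, and the weights `w1`/`w2` below are hypothetical stand-ins for parameters that would normally be learned.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def se_channel_attention(x, w1, w2):
    """SE-style channel attention over EEG channels (illustrative sketch).

    x  : C x T signal, a list of C channels, each a list of T time samples
    w1 : C x (C/r) reduction weights (hypothetical; normally learned)
    w2 : (C/r) x C expansion weights (hypothetical; normally learned)
    Returns x with every channel rescaled by its attention gate in (0, 1).
    """
    C = len(x)
    # Squeeze: global average pool each channel over time -> one value per channel
    s = [sum(ch) / len(ch) for ch in x]
    # Excitation, step 1: bottleneck projection with ReLU
    h = [max(0.0, sum(s[i] * w1[i][j] for i in range(C)))
         for j in range(len(w1[0]))]
    # Excitation, step 2: expand back to C gates, squashed through a sigmoid
    a = [sigmoid(sum(h[k] * w2[k][j] for k in range(len(h))))
         for j in range(C)]
    # Recalibrate: scale every time sample of channel j by its gate a[j]
    return [[a[j] * v for v in x[j]] for j in range(C)]
```

For example, with two channels and a bottleneck of size one, `se_channel_attention([[1.0, 1.0], [2.0, 2.0]], [[1.0], [0.0]], [[1.0, 1.0]])` scales both channels by sigmoid(1.0) ≈ 0.731, since only the first channel's mean reaches the bottleneck. A learned version would instead amplify task-relevant channels and suppress noisy ones, which is what makes channel attention a data-driven generalization of fixed spatial filters.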

