EEG-DBNet: A Dual-Branch Network for Temporal-Spectral Decoding in Motor-Imagery Brain-Computer Interfaces (2405.16090v3)
Abstract: Motor imagery electroencephalogram (EEG)-based brain-computer interfaces (BCIs) offer significant benefits to individuals with restricted limb mobility. However, challenges such as low signal-to-noise ratio and limited spatial resolution impede accurate feature extraction from EEG signals, limiting the classification accuracy of different actions. To address these challenges, this study proposes an end-to-end dual-branch network (EEG-DBNet) that decodes the temporal and spectral sequences of EEG signals in parallel through two distinct network branches. Each branch comprises a local convolutional block and a global convolutional block. The local convolutional block transforms the source signal from the temporal-spatial domain to the temporal-spectral domain, and by varying the number of filters and the convolution kernel sizes, the local convolutional blocks in the two branches adjust the lengths of their respective feature sequences. Different types of pooling layers then emphasize the salient features of each sequence, preparing them for subsequent global feature extraction. The global convolutional block splits and reconstructs the feature sequence produced by the local convolutional block in the same branch, and extracts further features through dilated causal convolutional neural networks. Finally, the outputs of the two branches are concatenated, and classification is performed by a fully connected layer. The proposed method achieves classification accuracies of 85.84% and 91.60% on the BCI Competition IV-2a and IV-2b datasets, respectively, surpassing existing state-of-the-art models. The source code is available at https://github.com/xicheng105/EEG-DBNet.
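To make the described pipeline concrete, below is a minimal PyTorch sketch of the dual-branch structure outlined in the abstract: a per-branch local block (temporal convolution, depthwise spatial convolution, branch-specific pooling), a global block that splits the resulting sequence and applies dilated causal convolutions, and a final concatenation into a fully connected classifier. All hyperparameters (filter counts, kernel lengths, pooling choices, split counts, dilation rates) and the class names `LocalBlock`, `GlobalBlock`, and `DualBranchNet` are illustrative assumptions, not the authors' published configuration; the reference implementation is at the linked repository.

```python
# Minimal sketch of the dual-branch idea from the abstract.
# Layer sizes below are placeholders, not the paper's actual settings.
import torch
import torch.nn as nn


class LocalBlock(nn.Module):
    """Temporal conv + depthwise spatial conv, then a branch-specific pooling.

    Maps (batch, 1, channels, time) to a per-branch feature sequence whose
    length is shaped by the kernel size and the pooling layer.
    """
    def __init__(self, n_channels, n_filters, kernel_len, pool):
        super().__init__()
        self.net = nn.Sequential(
            # temporal convolution along the time axis
            nn.Conv2d(1, n_filters, (1, kernel_len),
                      padding=(0, kernel_len // 2), bias=False),
            nn.BatchNorm2d(n_filters),
            # depthwise convolution across the electrode axis
            nn.Conv2d(n_filters, n_filters * 2, (n_channels, 1),
                      groups=n_filters, bias=False),
            nn.BatchNorm2d(n_filters * 2),
            nn.ELU(),
            pool,                        # e.g. avg- vs max-pooling per branch
            nn.Dropout(0.3),
        )

    def forward(self, x):
        y = self.net(x)                  # (batch, feat, 1, seq_len)
        return y.squeeze(2)              # (batch, feat, seq_len)


class GlobalBlock(nn.Module):
    """Splits the sequence, runs dilated causal convs on each piece, re-joins."""
    def __init__(self, feat, n_splits=2, kernel_len=4, dilations=(1, 2, 4)):
        super().__init__()
        self.n_splits = n_splits
        layers = []
        for d in dilations:
            # left-pad so each convolution stays causal and length-preserving
            layers += [nn.ConstantPad1d(((kernel_len - 1) * d, 0), 0.0),
                       nn.Conv1d(feat, feat, kernel_len, dilation=d),
                       nn.ELU()]
        self.tcn = nn.Sequential(*layers)

    def forward(self, x):
        pieces = torch.chunk(x, self.n_splits, dim=-1)   # split along time
        return torch.cat([self.tcn(p) for p in pieces], dim=-1)  # reconstruct


class DualBranchNet(nn.Module):
    """Two parallel branches, concatenated into a fully connected classifier."""
    def __init__(self, n_channels=22, n_samples=1125, n_classes=4):
        super().__init__()
        # the two branches differ in filter count, kernel size, and pooling
        self.temporal = nn.Sequential(
            LocalBlock(n_channels, 8, 64, nn.AvgPool2d((1, 8))),
            GlobalBlock(16),
        )
        self.spectral = nn.Sequential(
            LocalBlock(n_channels, 16, 32, nn.MaxPool2d((1, 8))),
            GlobalBlock(32),
        )
        feat_dim = (16 + 32) * (n_samples // 8)  # rough; padding-dependent
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                # x: (batch, 1, channels, time)
        t = self.temporal(x).flatten(1)
        s = self.spectral(x).flatten(1)
        return self.classifier(torch.cat([t, s], dim=1))
```

Under the placeholder shapes above, `DualBranchNet()(torch.randn(8, 1, 22, 1125))` returns an `(8, 4)` logit tensor, matching the 22-electrode, 4-class setup of BCI Competition IV-2a.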