A Dynamic Domain Adaptation Deep Learning Network for EEG-based Motor Imagery Classification (2309.11714v1)

Published 21 Sep 2023 in eess.SP, cs.AI, and cs.LG

Abstract: Adjacent electroencephalogram (EEG) channels are correlated, and how best to represent this correlation remains an open question. In addition, inter-individual differences in EEG signals mean that new subjects must spend a substantial amount of calibration time before using an EEG-based motor imagery brain-computer interface. To address these problems, we propose a Dynamic Domain Adaptation Based Deep Learning Network (DADL-Net). First, the EEG data are mapped into a three-dimensional geometric space and their temporal-spatial features are learned through a 3D convolution module; a spatial-channel attention mechanism then strengthens these features, and a final convolution module further learns their spatial-temporal information. Finally, to account for inter-subject and cross-session differences, we employ a dynamic domain adaptation strategy: the distance between feature distributions is reduced by introducing a Maximum Mean Discrepancy (MMD) loss function, and the classification layer is fine-tuned using part of the target-domain data. We verify the performance of the proposed method on the BCI Competition IV 2a and OpenBMI datasets. Under the intra-subject experiment, accuracies of 70.42% and 73.91% were achieved on the OpenBMI and BCIC IV 2a datasets, respectively.
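The abstract does not include code, but the MMD loss it mentions has a standard form: the squared distance between the kernel mean embeddings of source-domain and target-domain features. A minimal NumPy sketch (the function names, the Gaussian kernel choice, and the bandwidth `sigma` are illustrative assumptions, not details from the paper):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy:
    # mean k(s, s') + mean k(t, t') - 2 * mean k(s, t).
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, size=(64, 8))   # e.g. source-subject features
    tgt_near = rng.normal(0.0, 1.0, size=(64, 8))
    tgt_far = rng.normal(3.0, 1.0, size=(64, 8))
    # Matched distributions give a smaller MMD than mismatched ones.
    print(mmd2(src, tgt_near), mmd2(src, tgt_far))
```

In a training loop this quantity would be added to the classification loss so that the network learns features whose source and target distributions are close, which is the role the abstract assigns to the MMD term.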

