EEG-Deformer: A Dense Convolutional Transformer for Brain-computer Interfaces (2405.00719v2)
Abstract: Effectively learning the temporal dynamics in electroencephalogram (EEG) signals is challenging yet essential for decoding brain activities using brain-computer interfaces (BCIs). Although Transformers are popular for their long-term sequential learning ability in the BCI field, most methods combining Transformers with convolutional neural networks (CNNs) fail to capture the coarse-to-fine temporal dynamics of EEG signals. To overcome this limitation, we introduce EEG-Deformer, which incorporates two main novel components into a CNN-Transformer: (1) a Hierarchical Coarse-to-Fine Transformer (HCT) block that integrates a Fine-grained Temporal Learning (FTL) branch into Transformers, effectively discerning coarse-to-fine temporal patterns; and (2) a Dense Information Purification (DIP) module, which utilizes multi-level, purified temporal information to enhance decoding accuracy. Comprehensive experiments on three representative cognitive tasks (cognitive attention, driving fatigue, and mental workload detection) consistently confirm the generalizability of our proposed EEG-Deformer, demonstrating that it either outperforms or performs comparably to existing state-of-the-art methods. Visualization results show that EEG-Deformer learns from neurophysiologically meaningful brain regions for the corresponding cognitive tasks. The source code can be found at https://github.com/yi-ding-cs/EEG-Deformer.
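To make the two components concrete, the sketch below illustrates the general idea in toy NumPy form: each hierarchical block fuses a coarse branch (self-attention over temporal tokens) with a fine-grained branch (local smoothing plus temporal pooling), and a dense-purification step concatenates pooled features from every level for the final representation. This is a simplified illustration under assumed shapes and fusion choices, not the authors' implementation; all function names here are hypothetical, and the real architecture is in the linked repository.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # toy single-head self-attention with identity projections (coarse branch)
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[-1])
    return softmax(scores, axis=-1) @ tokens

def fine_grained_branch(tokens, kernel=3):
    # causal moving average ("conv") followed by stride-2 temporal max-pooling,
    # standing in for the fine-grained temporal learning branch
    n, d = tokens.shape
    smoothed = np.stack([tokens[max(0, i - kernel + 1):i + 1].mean(axis=0)
                         for i in range(n)])
    return smoothed.reshape(n // 2, 2, d).max(axis=1)

def hct_block(tokens):
    # fuse long-range (attention) and local (conv-like) temporal views;
    # the coarse output is pooled to match the fine branch's halved length
    coarse = self_attention(tokens)
    coarse = coarse.reshape(-1, 2, tokens.shape[-1]).mean(axis=1)
    fine = fine_grained_branch(tokens)
    return coarse + fine

tokens = np.random.default_rng(0).standard_normal((8, 4))  # (time tokens, dim)
level1 = hct_block(tokens)   # 8 -> 4 tokens
level2 = hct_block(level1)   # 4 -> 2 tokens
# dense-purification sketch: pool each level's output and concatenate
purified = np.concatenate([level1.mean(axis=0), level2.mean(axis=0)])
print(purified.shape)  # (8,)
```

The key structural point the sketch preserves is that the classifier sees pooled features from every hierarchy level, not only the deepest one, which is what lets coarse and fine temporal information both reach the decoding head.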
- Yi Ding
- Yong Li
- Hao Sun
- Rui Liu
- Chengxuan Tong
- Cuntai Guan
- Chenyu Liu
- Xinliang Zhou