Multi-Source Domain Adaptation with Transformer-based Feature Generation for Subject-Independent EEG-based Emotion Recognition (2401.02344v1)
Abstract: Although deep learning-based algorithms have demonstrated excellent performance in automated emotion recognition from electroencephalogram (EEG) signals, variations in brain signal patterns across individuals can diminish a model's effectiveness when it is applied to different subjects. While transfer learning techniques have exhibited promising outcomes, they still encounter challenges related to inadequate feature representations and may overlook the fact that the source subjects themselves can have distinct characteristics. In this work, we propose a multi-source domain adaptation approach with a transformer-based feature generator (MSDA-TF) designed to leverage information from multiple sources. The proposed feature generator retains convolutional layers to capture shallow spatial, temporal, and spectral representations of the EEG data, while self-attention mechanisms extract global dependencies within these features. During the adaptation process, we group the source subjects based on correlation values and aim to align the moments of the target subject with each source as well as across the sources. MSDA-TF is validated on the SEED dataset and is shown to yield promising results.
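The moment-alignment objective described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it only assumes that features are batches of vectors, that "moments" means the first- and second-order statistics of each feature dimension, and that the loss averages target-to-source distances together with pairwise source-to-source distances. The function names (`moment_distance`, `msda_moment_loss`) are illustrative, not from the paper.

```python
import numpy as np

def moment_distance(a, b, k=2):
    """L2 distance between the first k per-dimension moments of two feature batches."""
    d = 0.0
    for order in range(1, k + 1):
        d += np.linalg.norm((a ** order).mean(axis=0) - (b ** order).mean(axis=0))
    return d

def msda_moment_loss(source_feats, target_feat, k=2):
    """Align the target with each source domain and the source domains with each other.

    source_feats: list of (n_i, d) arrays, one batch of features per source group
    target_feat:  (m, d) array of target-subject features
    """
    n = len(source_feats)
    # target-to-source alignment, averaged over source groups
    loss = sum(moment_distance(s, target_feat, k) for s in source_feats) / n
    # pairwise source-to-source alignment
    pairs = [moment_distance(source_feats[i], source_feats[j], k)
             for i in range(n) for j in range(i + 1, n)]
    if pairs:
        loss += sum(pairs) / len(pairs)
    return loss
```

In training, this loss would be added to the classification loss so that the feature generator produces representations whose statistics agree across the grouped source subjects and the target subject.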
- Shadi Sartipi
- Mujdat Cetin