M3D: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition (2404.15615v3)
Abstract: Emotion decoding with electroencephalography (EEG)-based affective brain-computer interfaces (aBCIs) plays a crucial role in affective computing, but it is limited by the non-stationarity of EEG, individual variability, and the high cost of collecting large labeled datasets. Deep learning methods are effective, yet they demand extensive computational resources and large data volumes, limiting their practical application. To overcome these issues, we propose Manifold-based Domain Adaptation with Dynamic Distribution (M3D), a lightweight, non-deep transfer learning framework. M3D consists of four key modules: manifold feature transformation, dynamic distribution alignment, classifier learning, and ensemble learning. The data are first mapped to an optimal Grassmann manifold space, where the source and target domains can be aligned dynamically. The alignment adaptively weights the marginal and conditional distributions, improving adaptation efficiency across diverse datasets. Classifier learning applies the principle of structural risk minimization to build robust classification models, and the dynamic distribution alignment iteratively refines the classifier. Finally, the ensemble learning module aggregates the classifiers obtained at different optimization stages to leverage their diversity and enhance prediction accuracy. M3D is evaluated on two EEG emotion recognition datasets under two validation protocols (cross-subject single-session and cross-subject cross-session), as well as on a clinical EEG dataset for Major Depressive Disorder (MDD). Experimental results show that M3D outperforms traditional non-deep learning methods by 4.47% on average and reaches deep-learning-level performance with lower data and computational requirements, demonstrating its potential for real-world aBCI applications.
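The abstract describes the four modules only at a high level; the sketch below shows how such a pipeline could fit together. It is not the authors' implementation: `m3d_sketch` and `linear_mmd` are hypothetical helpers, PCA stands in for the Grassmann-manifold (GFK) feature transformation, a linear SVM stands in for the structural-risk-minimization classifier, and the balance factor `mu` between marginal and conditional alignment is a simple heuristic rather than the paper's estimator.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC


def linear_mmd(a, b):
    """Squared MMD with a linear kernel: distance between the domain means."""
    return float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))


def m3d_sketch(Xs, ys, Xt, n_components=30, n_iters=10):
    """Hypothetical M3D-style pipeline: manifold transform -> dynamic
    distribution alignment -> iterative classifier learning -> ensemble."""
    Xs, ys, Xt = np.asarray(Xs), np.asarray(ys), np.asarray(Xt)

    # 1) Manifold feature transformation (PCA used here as a stand-in for the
    #    Grassmann-manifold / GFK mapping described in the paper).
    k = min(n_components, Xs.shape[1], len(Xs) + len(Xt))
    pca = PCA(n_components=k).fit(np.vstack([Xs, Xt]))
    Zs, Zt = pca.transform(Xs), pca.transform(Xt)

    classes = np.unique(ys)
    clf = LinearSVC().fit(Zs, ys)            # initial source-only classifier
    yt_pseudo = clf.predict(Zt)
    preds_per_iter = []

    for _ in range(n_iters):
        # 2) Dynamic distribution alignment: estimate how much the marginal
        #    vs. conditional shift matters and balance them with mu in [0, 1].
        d_marg = linear_mmd(Zs, Zt)
        d_cond = np.mean([
            linear_mmd(Zs[ys == c], Zt[yt_pseudo == c])
            for c in classes if np.any(yt_pseudo == c)
        ])
        mu = d_cond / (d_marg + d_cond + 1e-12)

        # Crude conditional term: re-weight source samples so that class
        # proportions move toward the current target pseudo-label proportions.
        w = np.ones(len(ys))
        for c in classes:
            p_s = np.mean(ys == c)
            p_t = np.mean(yt_pseudo == c)
            w[ys == c] = (1.0 - mu) + mu * (p_t / (p_s + 1e-12))

        # 3) Classifier learning (a regularized linear SVM as an SRM-style
        #    learner), iteratively refined with the updated pseudo-labels.
        clf = LinearSVC().fit(Zs, ys, sample_weight=w)
        yt_pseudo = clf.predict(Zt)
        preds_per_iter.append(yt_pseudo)

    # 4) Ensemble learning: majority vote over the per-iteration predictions.
    votes = np.stack(preds_per_iter)          # shape: (n_iters, n_target)
    final = np.empty(votes.shape[1], dtype=classes.dtype)
    for j in range(votes.shape[1]):
        vals, counts = np.unique(votes[:, j], return_counts=True)
        final[j] = vals[np.argmax(counts)]
    return final
```

In this sketch the ensemble step simply majority-votes the target pseudo-labels produced at each refinement iteration, mirroring the abstract's idea of aggregating classifiers from different optimization stages rather than keeping only the final one.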