A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation (2310.03747v1)

Published 21 Sep 2023 in eess.SP, cs.AI, and cs.LG

Abstract: Due to the abundant neurophysiological information in electroencephalogram (EEG) signals, EEG combined with deep learning methods has gained substantial traction across numerous real-world tasks. However, the development of supervised learning methods based on EEG signals has been hindered by the high cost of manually labeling large-scale EEG datasets and by significant label discrepancies. Self-supervised frameworks have been adopted in the vision and language fields to address this issue, but the lack of EEG-specific theoretical foundations hampers their applicability across various tasks. To address these challenges, this paper proposes a knowledge-driven cross-view contrastive learning framework (KDC2), which integrates neurological theory to extract effective representations from EEG with limited labels. KDC2 creates scalp and neural views of EEG signals, simulating the external and internal representations of brain activity. Subsequently, inter-view and cross-view contrastive learning pipelines, combined with various augmentation methods, are applied to capture neural features from the different views. By modeling prior neural knowledge based on the theory of homologous neural information consistency, the proposed method extracts invariant and complementary neural knowledge to generate combined representations. Experimental results on different downstream tasks demonstrate that the method outperforms state-of-the-art methods, highlighting the superior generalization of neural-knowledge-supported EEG representations across various brain tasks.
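The contrastive pipelines described above pair embeddings of the same EEG segment across views and push apart embeddings of different segments. The paper does not publish its exact loss here, but a minimal sketch of a standard symmetric InfoNCE (NT-Xent) objective — the kind of loss such inter-view and cross-view pipelines typically build on — looks like this (all names and the temperature value are illustrative, not taken from the paper):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Symmetric InfoNCE loss between embeddings of two views.

    z1, z2: (N, D) arrays; row i of z1 and row i of z2 are embeddings
    of the same EEG segment seen from two views (a positive pair).
    All other cross-row pairs act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N); positives on the diagonal
    idx = np.arange(len(z1))

    def xent(l):
        # Row-wise cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # Symmetrize: view 1 -> view 2 and view 2 -> view 1.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each segment's two view embeddings together while separating them from the other segments in the batch; misaligning the pairing (e.g., shifting one view's rows) should raise the loss.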
