TS-MoCo: Time-Series Momentum Contrast for Self-Supervised Physiological Representation Learning (2306.06522v1)
Abstract: Limited availability of labeled physiological data often prohibits the use of powerful supervised deep learning models in the biomedical machine intelligence domain. We address this problem by proposing a novel encoding framework that relies on self-supervised learning with momentum contrast to learn representations from multivariate time series of various physiological domains without needing labels. Our model uses a transformer architecture that can be easily adapted to classification problems by optimizing a linear output classification layer. We experimentally evaluate our framework on two publicly available physiological datasets from different domains: human activity recognition from embedded inertial sensors and emotion recognition from electroencephalography. We show that our self-supervised learning approach can indeed learn discriminative features which can be exploited in downstream classification tasks. Our work enables the development of domain-agnostic intelligent systems that can effectively analyze multivariate time-series data from physiological domains.
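The abstract describes the core recipe: a transformer encoder pretrained with momentum contrast (MoCo-style) on unlabeled multivariate time-series windows, later adapted to classification via a linear output layer. Below is a minimal PyTorch sketch of that recipe. All names and hyperparameters here (`TSEncoder`, the jitter augmentation, queue size, momentum, temperature) are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Minimal sketch of MoCo-style self-supervised pretraining for multivariate
# time series. Hypothetical names/hyperparameters; not the paper's exact code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSEncoder(nn.Module):
    """Transformer encoder mapping (batch, time, channels) to an embedding."""
    def __init__(self, n_channels, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                        # x: (B, T, C)
        h = self.transformer(self.proj(x))       # (B, T, d_model)
        return F.normalize(h.mean(dim=1), dim=-1)  # pooled, L2-normalized

def momentum_update(q_enc, k_enc, m=0.999):
    """EMA update of the key encoder from the query encoder (no gradients)."""
    for pq, pk in zip(q_enc.parameters(), k_enc.parameters()):
        pk.data = m * pk.data + (1.0 - m) * pq.data

def info_nce(q, k, queue, tau=0.07):
    """Contrastive loss: the momentum-encoded view is the positive,
    past keys in the queue serve as negatives."""
    l_pos = (q * k).sum(dim=-1, keepdim=True)    # (B, 1)
    l_neg = q @ queue.T                          # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)

# One pretraining step on synthetic data (e.g. 9 inertial channels).
encoder_q = TSEncoder(n_channels=9)
encoder_k = copy.deepcopy(encoder_q)             # key encoder starts as a copy
for p in encoder_k.parameters():
    p.requires_grad_(False)
opt = torch.optim.SGD(encoder_q.parameters(), lr=1e-3)

x = torch.randn(32, 128, 9)                      # (batch, time, channels)
q = encoder_q(x + 0.05 * torch.randn_like(x))    # query view (jitter)
with torch.no_grad():
    k = encoder_k(x + 0.05 * torch.randn_like(x))  # key view
queue = F.normalize(torch.randn(4096, 64), dim=-1)  # stand-in negative queue

loss = info_nce(q, k, queue)
opt.zero_grad(); loss.backward(); opt.step()
momentum_update(encoder_q, encoder_k)
```

For the downstream adaptation the abstract mentions, one would freeze the pretrained encoder and train only a linear head (e.g. `nn.Linear(64, n_classes)`) on its pooled embeddings, so the learned representations are evaluated without fine-tuning the backbone.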
Authors: Philipp Hallgarten, David Bethge, Ozan Özdenizci, Tobias Grosse-Puppendahl, Enkelejda Kasneci