Self-Supervised Learning for Time Series: Contrastive or Generative? (2403.09809v1)
Abstract: Self-supervised learning (SSL) has recently emerged as a powerful approach to learning representations from large-scale unlabeled data, showing promising results in time series analysis. Self-supervised representation learning can be categorized into two mainstream paradigms: contrastive and generative. In this paper, we present a comprehensive comparative study of contrastive and generative SSL methods for time series. We first introduce the basic frameworks for contrastive and generative SSL, respectively, and discuss how each obtains the supervision signal that guides model optimization. We then implement a classical algorithm of each type (SimCLR vs. MAE) and conduct a comparative analysis under fair settings. Our results provide insights into the strengths and weaknesses of each approach and offer practical recommendations for choosing suitable SSL methods. We also discuss the implications of our findings for the broader field of representation learning and propose future research directions. All the code and data are released at \url{https://github.com/DL4mHealth/SSL_Comparison}.
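To make the two paradigms concrete, the sketch below contrasts the two supervision signals the abstract names: a SimCLR-style contrastive (NT-Xent) loss, which pulls embeddings of two augmented views of the same series together, and an MAE-style generative loss, which reconstructs masked-out values. This is a minimal NumPy illustration written for this summary; the function names (`nt_xent_loss`, `masked_reconstruction_loss`) are ours and do not come from the paper's released repository.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (SimCLR-style) NT-Xent loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N series.
    The positive pair for sample i in view 1 is sample i in view 2; all
    other 2N - 2 embeddings in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-norm -> cosine sim
    sim = z @ z.T / temperature                        # (2N, 2N) logits
    np.fill_diagonal(sim, -np.inf)                     # drop self-similarity
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)   # cross-entropy per row
    return loss.mean()

def masked_reconstruction_loss(x, x_hat, mask):
    """Generative (MAE-style) loss: MSE on masked positions only.

    x, x_hat: (N, T) original and reconstructed series.
    mask: (N, T) boolean array, True where values were hidden from the encoder.
    """
    return ((x_hat - x) ** 2)[mask].mean()
```

In the contrastive case the supervision signal comes from the pairing of augmented views (no reconstruction target), while in the generative case it comes from the masked raw values themselves; this is the core distinction the paper's comparison is built around.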
- Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine. In Ambient Assisted Living and Home Care: 4th International Workshop, IWAAL 2012, Vitoria-Gasteiz, Spain, December 3-5, 2012. Proceedings 4, pages 216–223. Springer, 2012.
- A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020.
- MADE: Masked autoencoder for distribution estimation. In International conference on machine learning, pages 881–889. PMLR, 2015.
- Semi-supervised autoencoder. In ICONIP, pages 82–89. Springer, 2016.
- Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.
- Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.
- Olivier Henaff. Data-efficient image recognition with contrastive predictive coding. In International conference on machine learning, pages 4182–4192. PMLR, 2020.
- TimeAutoAD: Autonomous anomaly detection with self-supervised contrastive loss for multivariate time series. IEEE Transactions on Network Science and Engineering, 9(3):1604–1619, 2022.
- A deep learning method combined sparse autoencoder with SVM. In 2015 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, pages 257–260. IEEE, 2015.
- Auto-encoding variational Bayes. In ICLR, 2014.
- CLOCS: Contrastive learning of cardiac signals across space, time, and patients. In International Conference on Machine Learning, pages 5606–5615. PMLR, 2021.
- Ti-MAE: Self-supervised masked time series autoencoders. arXiv preprint arXiv:2301.08871, 2023.
- Self-supervised pretraining isolated forest for outlier detection. In 2022 International Conference on Big Data, Information and Computer Network (BDICN), pages 306–310. IEEE, 2022.
- ECG-based heart arrhythmia diagnosis through attentional convolutional neural networks. In 2021 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), pages 156–162. IEEE, 2021.
- Self-supervised learning: Generative or contrastive. IEEE Transactions on Knowledge and Data Engineering, 35(1):857–876, 2021.
- Self-supervised contrastive learning for medical time series: A systematic review. Sensors, 23(9):4221, 2023.
- Semi-supervised cross-subject emotion recognition based on stacked denoising autoencoder architecture using a fusion of multi-modal physiological signals. Entropy, 24(5):577, 2022.
- Generative semi-supervised learning for multivariate time series imputation. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 8983–8991, 2021.
- The impact of the MIT-BIH arrhythmia database. IEEE Engineering in Medicine and Biology Magazine, 20(3):45–50, 2001.
- Utilizing expert features for contrastive learning of time-series representations. In International Conference on Machine Learning, pages 16969–16989. PMLR, 2022.
- Unsupervised representation learning for time series with temporal neighborhood coding. In International Conference on Learning Representations, 2021.
- Pixel recurrent neural networks. In International conference on machine learning, pages 1747–1756. PMLR, 2016.
- Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103, 2008.
- Time series data augmentation for deep learning: A survey. In IJCAI, 2020.
- Deep multi-instance contrastive learning with dual attention for anomaly precursor detection. In Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pages 91–99. SIAM, 2021.
- TimeCLR: A self-supervised contrastive learning framework for univariate time series representation. Knowledge-Based Systems, 245:108606, 2022.
- TS2Vec: Towards universal representation of time series. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 8980–8987, 2022.
- Semi-supervised learning of bearing anomaly detection via deep variational autoencoders. 2019.
- A survey on masked autoencoder for self-supervised learning in vision and beyond. arXiv preprint arXiv:2208.00173, 2022.
- Self-supervised contrastive pre-training for time series via time-frequency consistency. In Advances in Neural Information Processing Systems, 2022.
- Ziyu Liu
- Azadeh Alavi
- Minyi Li
- Xiang Zhang