Semi-Supervised End-To-End Contrastive Learning For Time Series Classification
Abstract: Time series classification is a critical task in various domains, such as finance, healthcare, and sensor data analysis. Unsupervised contrastive learning has garnered significant interest for learning effective representations from time series data with limited labels. The prevalent approach in existing contrastive learning methods consists of two separate stages: pre-training the encoder on unlabeled datasets and fine-tuning the pre-trained model on a small-scale labeled dataset. However, such two-stage approaches suffer from several shortcomings: the unsupervised pre-training contrastive loss cannot directly affect the downstream fine-tuned classifier, and the classification loss guided by valuable ground truth is left unexploited during pre-training. In this paper, we propose an end-to-end model called SLOTS (Semi-supervised Learning fOr Time clasSification). SLOTS receives semi-labeled datasets, comprising a large number of unlabeled samples and a small proportion of labeled samples, and maps them to an embedding space through an encoder. We compute not only an unsupervised contrastive loss on all samples but also a supervised contrastive loss on the samples with ground truth. The learned embeddings are fed into a classifier, and the classification loss is calculated using the available true labels. The unsupervised contrastive, supervised contrastive, and classification losses are jointly used to optimize the encoder and classifier. We evaluate SLOTS by comparing it with ten state-of-the-art methods across five datasets. The results demonstrate that SLOTS is a simple yet effective framework. Compared to the two-stage framework, our end-to-end SLOTS utilizes the same input data and consumes a similar computational cost, but delivers significantly improved performance. We release code and datasets at https://anonymous.4open.science/r/SLOTS-242E.
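The joint objective described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the NT-Xent form of the unsupervised loss, the SupCon form of the supervised loss, the equal loss weights `lam`, and all function names are assumptions made for exposition.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Unsupervised contrastive (NT-Xent) loss over two augmented views.
    z1, z2: (n, d) embeddings of the same n samples under two augmentations."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)          # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                            # drop self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)]) # index of positive view
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

def sup_con(z, y, tau=0.5):
    """Supervised contrastive loss: same-label samples are positives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)
    losses = []
    for i in range(len(z)):
        positives = (y == y[i]) & (np.arange(len(z)) != i)
        if not positives.any():
            continue                                          # no positive pair for i
        log_prob = sim[i] - np.log(np.exp(sim[i]).sum())
        losses.append(-log_prob[positives].mean())
    return float(np.mean(losses))

def cross_entropy(logits, y):
    """Classification loss on the labeled subset."""
    logits = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(y)), y].mean())

def slots_loss(z1, z2, z_lab, y_lab, logits, lam=(1.0, 1.0, 1.0)):
    """Joint objective: all three losses optimize encoder and classifier together,
    rather than in separate pre-training / fine-tuning stages."""
    return (lam[0] * nt_xent(z1, z2)
            + lam[1] * sup_con(z_lab, y_lab)
            + lam[2] * cross_entropy(logits, y_lab))
```

In an actual training loop, `z1`/`z2` would come from the encoder applied to augmented views of the full semi-labeled batch, while `z_lab` and `logits` would be the encoder outputs and classifier outputs for the labeled subset only; a single backward pass through `slots_loss` then updates both modules end to end.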