Parametric Augmentation for Time Series Contrastive Learning (2402.10434v1)
Abstract: Modern techniques like contrastive learning have been used effectively in many areas, including computer vision, natural language processing, and graph-structured data. A crucial stage in contrastive learning is creating positive examples that help the model learn robust and discriminative representations. Typically, preset human intuition directs the selection of relevant data augmentations; this rule of thumb works well in the vision and language domains because humans readily recognize the relevant patterns. However, it is impractical to visually inspect the temporal structures in time series, and the diversity of time-series augmentations at both the dataset and instance levels makes it difficult to choose meaningful augmentations on the fly. In this study, we address this gap by analyzing time series data augmentation through the lens of information theory and summarizing the most commonly adopted augmentations in a unified format. We then propose AutoTCL, a contrastive learning framework with parametric augmentation that can be adaptively employed to support time series representation learning. The proposed approach is encoder-agnostic, allowing it to be seamlessly integrated with different backbone encoders. On univariate forecasting tasks, our method achieves highly competitive results, with an average 6.5% reduction in MSE and 4.7% in MAE over the leading baselines; on classification tasks, AutoTCL achieves a 1.2% increase in average accuracy.
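To make the two ideas in the abstract concrete, the sketch below shows (a) a unified parametric form of time-series augmentation, where a per-timestamp mask and offset subsume common augmentations such as masking, scaling, and jittering, and (b) an InfoNCE-style contrastive loss between the original and augmented views. This is a minimal illustrative sketch, not the AutoTCL implementation: the toy linear encoder, the fixed random `mask`/`offset` values, and all variable names are assumptions; in AutoTCL the augmentation parameters are produced by a learned network and the backbone encoder is interchangeable.

```python
import numpy as np

rng = np.random.default_rng(0)

def parametric_augment(x, mask, offset):
    """Unified augmentation form v = mask * x + offset.
    A binary mask recovers timestamp masking/cropping, a real-valued
    mask recovers scaling, and the offset term recovers jittering."""
    return mask * x + offset

def encode(x, W):
    """Toy stand-in for a backbone encoder (the framework is
    encoder-agnostic); returns L2-normalized embeddings."""
    z = np.tanh(x @ W)
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss: (z1[i], z2[i]) are positive pairs; other rows
    in the batch serve as negatives."""
    logits = z1 @ z2.T / tau                      # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

B, T, D = 8, 64, 16                  # batch size, series length, embed dim
x = rng.standard_normal((B, T))      # batch of univariate time series
W = rng.standard_normal((T, D)) / np.sqrt(T)

# Fixed illustrative augmentation parameters; a parametric-augmentation
# framework would instead learn these per instance.
mask = (rng.random((B, T)) > 0.1).astype(float)  # soft timestamp masking
offset = 0.05 * rng.standard_normal((B, T))      # small jitter

v = parametric_augment(x, mask, offset)
loss = info_nce(encode(x, W), encode(v, W))
print(float(loss))
```

Because the positive view is a mild transform of the original, its embedding stays close to the anchor while unrelated series in the batch act as negatives; learning the mask and offset (rather than fixing them) is what lets the augmentation adapt per dataset and per instance.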
Authors: Xu Zheng, Tianchun Wang, Wei Cheng, Aitian Ma, Haifeng Chen, Mo Sha, Dongsheng Luo