
Deep Unsupervised Domain Adaptation for Time Series Classification: a Benchmark (2312.09857v2)

Published 15 Dec 2023 in cs.LG, cs.AI, and stat.ML

Abstract: Unsupervised Domain Adaptation (UDA) aims to harness labeled source data to train models for unlabeled target data. Despite extensive research in domains like computer vision and natural language processing, UDA remains underexplored for time series data, which has widespread real-world applications ranging from medicine and manufacturing to earth observation and human activity recognition. Our paper addresses this gap by introducing a comprehensive benchmark for evaluating UDA techniques for time series classification, with a focus on deep learning methods. We provide seven new benchmark datasets covering various domain shifts and temporal dynamics, facilitating fair and standardized UDA method assessments with state of the art neural network backbones (e.g. Inception) for time series data. This benchmark offers insights into the strengths and limitations of the evaluated approaches while preserving the unsupervised nature of domain adaptation, making it directly applicable to practical problems. Our paper serves as a vital resource for researchers and practitioners, advancing domain adaptation solutions for time series data and fostering innovation in this critical field. The implementation code of this benchmark is available at https://github.com/EricssonResearch/UDA-4-TSC.

Authors (5)
  1. Hassan Ismail Fawaz (12 papers)
  2. Ganesh Del Grosso (4 papers)
  3. Tanguy Kerdoncuff (2 papers)
  4. Aurelie Boisbunon (2 papers)
  5. Illyyne Saffar (2 papers)
Citations (1)

Summary

  • The paper establishes a benchmark by introducing seven diverse time series datasets to evaluate deep unsupervised domain adaptation methods.
  • It evaluates adaptation techniques such as DANN, CDAN, OTDA, and layer transfer, which realign models for improved performance across domains.
  • The study demonstrates that the hyperparameter tuning strategy, particularly tuning on target risk, boosts classifier performance substantially, more so than the choice of architecture.

Overview of Unsupervised Domain Adaptation for Time Series Classification

Unsupervised Domain Adaptation (UDA) is a critical subfield of machine learning in which the goal is to enable a model, trained on a labeled dataset from one domain, to perform well on a different, unlabeled domain that shares some similarities with the training domain. While UDA is well studied in areas such as natural language processing and image recognition, it remains far less explored for time series classification, which arises in a variety of applications such as healthcare, finance, and activity recognition.
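
In standard notation (an assumption of this summary rather than the paper's exact formulation), the problem can be stated as follows:

```latex
% Unsupervised domain adaptation setup (generic notation, assumed for illustration).
% Labeled source sample and unlabeled target sample:
%   D_S = {(x_i^s, y_i^s)}_{i=1}^{n_s} drawn from p_S,
%   D_T = {x_j^t}_{j=1}^{n_t}          drawn from p_T,  with  p_S != p_T.
% Goal: learn a classifier f that minimizes the target risk without seeing target labels:
\[
  \varepsilon_T(f) \;=\; \mathbb{E}_{(x,\,y)\sim p_T}\!\left[\mathbf{1}\{f(x) \neq y\}\right].
\]
```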

The paper fills this gap by establishing a benchmark for UDA in time series classification, aiming to provide a standardized way to evaluate new UDA methods. To facilitate this, the researchers introduce seven new datasets that vary in their domain shifts and temporal dynamics.

Domain Adaptation Techniques

The paper discusses several strategies:

  1. Layer transfer, where certain layers of a neural network trained on the source domain are reused on the target domain.
  2. Domain-Adversarial Neural Network (DANN), which minimizes domain differences by adversarially training a domain discriminator against the feature extractor.
  3. Conditional Adversarial Domain Adaptation (CDAN), which aligns conditional distributions, conditioning the adversarial game on classifier predictions, to achieve domain invariance.
  4. Optimal Transport Domain Adaptation (OTDA), which uses optimal transport theory to align the source and target distributions.

These methods realign the model so that it performs well on the unlabeled target domain, addressing the distribution shift between the training (source) and target domains. A minimal sketch of the adversarial (DANN-style) approach is given below.
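
To make the adversarial idea concrete, here is a minimal, hypothetical PyTorch sketch of DANN-style training on time series. The class and function names, the small convolutional backbone (a stand-in for the Inception backbone used in the benchmark), and the training step are illustrative assumptions, not the benchmark's actual code.

```python
# Hypothetical DANN sketch for time series: gradient reversal + domain discriminator.
import torch
from torch import nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; flips and scales gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class TimeSeriesDANN(nn.Module):
    """Illustrative model: conv feature extractor + label classifier + domain discriminator."""
    def __init__(self, in_channels, n_classes, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(hidden, n_classes)   # label predictor
        self.discriminator = nn.Linear(hidden, 2)        # source vs. target

    def forward(self, x, lambd=1.0):
        z = self.features(x)
        return self.classifier(z), self.discriminator(GradReverse.apply(z, lambd))


def dann_step(model, optimizer, xs, ys, xt, lambd=1.0):
    """One update: supervised loss on labeled source + adversarial domain loss on both domains."""
    ce = nn.CrossEntropyLoss()
    logits_s, dom_s = model(xs, lambd)
    _, dom_t = model(xt, lambd)
    dom_labels = torch.cat([torch.zeros(len(xs)), torch.ones(len(xt))]).long()
    loss = ce(logits_s, ys) + ce(torch.cat([dom_s, dom_t]), dom_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

CDAN follows the same pattern but conditions the discriminator on the classifier's predictions, while OTDA replaces the adversarial game with an optimal transport plan that matches the source and target feature distributions.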

Hyperparameter Tuning without Labels

A notable challenge in UDA is tuning hyperparameters in the absence of labeled target data, since the usual supervised-learning practice of validating on held-out labels is not available. The researchers investigate three approaches:

  • Target Risk: Involves using (at least some) target labels, serving more as an upper bound on achievable performance.
  • Source Risk: Relies on the empirical risk computed with source labels only.
  • Importance Weighted Cross-Validation (IWCV): Reweights the source validation risk to account for the shift between source and target distributions (see the sketch after this list).
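
As a concrete, hedged illustration of IWCV-style model selection without target labels, the sketch below estimates importance weights with a probabilistic domain classifier and reweights the source validation error. The function names and the logistic-regression density-ratio estimator are assumptions made for illustration; the benchmark may implement IWCV differently.

```python
# Hypothetical IWCV sketch: reweight source validation error by an estimated density ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression


def importance_weights(feat_source, feat_target):
    """Estimate w(x) = p_target(x) / p_source(x) with a logistic domain classifier."""
    X = np.vstack([feat_source, feat_target])
    d = np.concatenate([np.zeros(len(feat_source)), np.ones(len(feat_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_t = np.clip(clf.predict_proba(feat_source)[:, 1], 1e-6, 1 - 1e-6)
    # Bayes' rule: p_t / (1 - p_t), rescaled by sample sizes, approximates the density ratio.
    return p_t / (1.0 - p_t) * (len(feat_source) / len(feat_target))


def iwcv_risk(errors_source_val, weights):
    """Importance-weighted source validation risk; the candidate with the lowest value wins."""
    return np.average(errors_source_val, weights=weights)
```

In this framing, Source Risk corresponds to using uniform weights, while Target Risk would require the held-out target labels that IWCV deliberately avoids.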

Experimentation and Findings

The experimental framework assesses deep UDA algorithms under the different hyperparameter tuning strategies, using state-of-the-art backbones for time series such as the Inception architecture behind InceptionTime.

The results show that the choice of hyperparameter tuning method, such as the Target Risk approach, can lead to significant improvements in classifier performance. The paper also finds that, while the choice of neural network architecture matters, the UDA technique itself has a more notable impact on performance.

The Implications of the Study

Setting up this benchmark and proposing new datasets paves the way for more rigorous and fair evaluation of UDA approaches on time series data. With standardized datasets and methods, future work can build on these findings to develop more sophisticated UDA algorithms, ultimately advancing the field and its applications.

These advances could prove invaluable in real-world scenarios where acquiring labeled data is costly or impractical, such as sensor-based monitoring systems or patient health tracking, where the deployment environment keeps changing. The paper emphasizes the need for domain adaptation solutions in dynamic settings that require machine learning models to remain adaptive and resilient to changes in the data distribution.
