Deep Learning for Time Series Classification and Extrinsic Regression: A Current Survey (2302.02515v2)

Published 6 Feb 2023 in cs.LG, cs.AI, and cs.CV

Abstract: Time Series Classification and Extrinsic Regression are important and challenging machine learning tasks. Deep learning has revolutionized natural language processing and computer vision and holds great promise in other fields such as time series analysis where the relevant features must often be abstracted from the raw data but are not known a priori. This paper surveys the current state of the art in the fast-moving field of deep learning for time series classification and extrinsic regression. We review different network architectures and training methods used for these tasks and discuss the challenges and opportunities when applying deep learning to time series data. We also summarize two critical applications of time series classification and extrinsic regression, human activity recognition and satellite earth observation.
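
As a concrete illustration of the two tasks the survey covers (this sketch is not from the paper), the PyTorch snippet below shows how time series classification and extrinsic regression can share one convolutional backbone and differ only in the output head and loss; the channel count, class count, and tensor shapes are hypothetical.

```python
# Minimal sketch (illustrative only): the same 1D-CNN backbone serves both
# classification (discrete label per series) and extrinsic regression
# (real-valued target per series); only the head and loss change.
import torch
import torch.nn as nn

class TSBackbone(nn.Module):
    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global average pooling over time
            nn.Flatten(),             # -> (batch, hidden)
        )

    def forward(self, x):             # x: (batch, channels, time)
        return self.net(x)

backbone = TSBackbone(n_channels=3)
clf_head = nn.Linear(64, 5)           # classification: 5 hypothetical classes
reg_head = nn.Linear(64, 1)           # extrinsic regression: one real value

x = torch.randn(8, 3, 128)            # 8 multivariate series, 3 channels, length 128
logits = clf_head(backbone(x))        # train with nn.CrossEntropyLoss()
y_hat = reg_head(backbone(x))         # train with nn.MSELoss()
```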

References (320)
  1. Q. Yang and X. Wu, “10 challenging problems in data mining research,” Int. J. Inf. Tech. & Decision Making, vol. 5, no. 04, pp. 597–604, 2006.
  2. P. Esling and C. Agon, “Time-series data mining,” ACM Computing Surveys (CSUR), vol. 45, no. 1, pp. 1–34, 2012.
  3. H. F. Nweke, Y. W. Teh, M. A. Al-Garadi, and U. R. Alo, “Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges,” Expert Systems with Applications, vol. 105, pp. 233–261, 2018.
  4. J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu, “Deep learning for sensor-based activity recognition: A survey,” Pattern recognition letters, vol. 119, pp. 3–11, 2019.
  5. K. Chen, D. Zhang, L. Yao, B. Guo, Z. Yu, and Y. Liu, “Deep learning for sensor-based human activity recognition: Overview, challenges, and opportunities,” ACM Computing Surveys (CSUR), vol. 54, no. 4, pp. 1–40, 2021.
  6. R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball, “Deep learning with convolutional neural networks for EEG decoding and visualization,” Human brain mapping, vol. 38, no. 11, pp. 5391–5420, 2017.
  7. A. Rajkomar, E. Oren, K. Chen, A. M. Dai, N. Hajaj, M. Hardt, P. J. Liu, X. Liu, J. Marcus, M. Sun et al., “Scalable and accurate deep learning with electronic health records,” NPJ digital medicine, vol. 1, no. 1, pp. 1–10, 2018.
  8. A. Bagnall, H. A. Dau, J. Lines, M. Flynn, J. Large, A. Bostrom, P. Southam, and E. Keogh, “The UEA multivariate time series classification archive, 2018,” arXiv preprint:1811.00075, 2018.
  9. H. A. Dau, A. Bagnall, K. Kamgar, C.-C. M. Yeh, Y. Zhu, S. Gharghabi, C. A. Ratanamahatana, and E. Keogh, “The UCR time series archive,” IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 6, pp. 1293–1305, 2019.
  10. C. W. Tan, C. Bergmeir, F. Petitjean, and G. I. Webb, “Time series extrinsic regression,” Data Min. Knowl. Discov., vol. 35, no. 3, pp. 1032–1060, 2021.
  11. M. Middlehurst, P. Schäfer, and A. Bagnall, “Bake off redux: a review and experimental evaluation of recent time series classification algorithms,” arXiv preprint arXiv:2304.13029, 2023.
  12. H. I. Fawaz, B. Lucas, G. Forestier, C. Pelletier, D. F. Schmidt, J. Weber, G. I. Webb, L. Idoumghar, P.-A. Muller, and F. Petitjean, “Inceptiontime: Finding alexnet for time series classification,” Data Min. Knowl. Discov., vol. 34, no. 6, pp. 1936–1962, 2020.
  13. N. M. Foumani, C. W. Tan, G. I. Webb, and M. Salehi, “Improving position encoding of transformers for multivariate time series classification,” Data Min. Knowl. Discov., Sep 2023.
  14. A. Dempster, F. Petitjean, and G. I. Webb, “ROCKET: exceptionally fast and accurate time series classification using random convolutional kernels,” Data Min. Knowl. Discov., vol. 34, no. 5, pp. 1454–1495, 2020.
  15. H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, “Deep learning for time series classification: a review,” Data Min Knowl Discov, vol. 33, no. 4, pp. 917–963, 2019.
  16. Z. Wang, W. Yan, and T. Oates, “Time series classification from scratch with deep neural networks: A strong baseline,” in 2017 International joint conference on neural networks (IJCNN).   IEEE, 2017, pp. 1578–1585.
  17. Q. Wen, T. Zhou, C. Zhang, W. Chen, Z. Ma, J. Yan, and L. Sun, “Transformers in time series: A survey,” arXiv preprint:2202.07125, 2022.
  18. Y. Hao and H. Cao, “A new attention mechanism to classify multivariate time series,” in 29th Int. Joint Conf. Artificial Intelligence, 2020.
  19. G. Zerveas, S. Jayaraman, D. Patel, A. Bhamidipaty, and C. Eickhoff, “A transformer-based framework for multivariate time series representation learning,” in 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021, pp. 2114–2124.
  20. X. Liu, F. Zhang, Z. Hou, L. Mian, Z. Wang, J. Zhang, and J. Tang, “Self-supervised learning: Generative or contrastive,” IEEE transactions on knowledge and data engineering, vol. 35, no. 1, pp. 857–876, 2021.
  21. E. Eldele, M. Ragab, Z. Chen, M. Wu, C. K. Kwoh, X. Li, and C. Guan, “Time-series representation learning via temporal and contextual contrasting,” in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, 2021, pp. 2352–2359.
  22. C.-H. H. Yang, Y.-Y. Tsai, and P.-Y. Chen, “Voice2series: Reprogramming acoustic models for time series classification,” in Int. conf. mach. learn.   PMLR, 2021, pp. 11 808–11 819.
  23. Z. Yue, Y. Wang, J. Duan, T. Yang, C. Huang, Y. Tong, and B. Xu, “Ts2vec: Towards universal representation of time series,” in AAAI, vol. 36, no. 8, 2022, pp. 8980–8987.
  24. N. M. Foumani, C. W. Tan, G. I. Webb, and M. Salehi, “Series2vec: Similarity-based self-supervised representation learning for time series classification,” arXiv preprint arXiv:2312.03998, 2023.
  25. A. Bagnall, J. Lines, A. Bostrom, J. Large, and E. Keogh, “The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances,” Data Min. Knowl. Discov., vol. 31, no. 3, pp. 606–660, 2017.
  26. A. P. Ruiz, M. Flynn, J. Large, M. Middlehurst, and A. Bagnall, “The great multivariate time series classification bake off: a review and experimental evaluation of recent algorithmic advances,” Data Min. Knowl. Discov., pp. 1–49, 2020.
  27. C. W. Tan, C. Bergmeir, F. Petitjean, and G. I. Webb, “Monash University, UEA, UCR time series regression archive,” arXiv preprint:2006.10996, 2020.
  28. M. Längkvist, L. Karlsson, and A. Loutfi, “A review of unsupervised feature learning and deep learning for time-series modeling,” Pattern Recognition Letters, vol. 42, pp. 11–24, 2014.
  29. Y. Bengio, L. Yao, G. Alain, and P. Vincent, “Generalized denoising auto-encoders as generative models,” Advances neural inf. process. syst., vol. 26, 2013.
  30. Q. Hu, R. Zhang, and Y. Zhou, “Transfer learning for short-term wind speed prediction with deep neural networks,” Renewable Energy, vol. 85, pp. 83–95, 2016.
  31. J. Serrà, S. Pascual, and A. Karatzoglou, “Towards a universal neural network encoder for time series.” in CCIA, 2018, pp. 120–129.
  32. D. Banerjee, K. Islam, K. Xue, G. Mei, L. Xiao, G. Zhang, R. Xu, C. Lei, S. Ji, and J. Li, “A deep transfer learning approach for improved post-traumatic stress disorder diagnosis,” Knowledge and Information Systems, vol. 60, no. 3, pp. 1693–1724, 2019.
  33. W. Aswolinskiy, R. F. Reinhart, and J. Steil, “Time series classification in reservoir-and model-space,” Neural Processing Letters, vol. 48, no. 2, pp. 789–809, 2018.
  34. E. Brophy, Z. Wang, Q. She, and T. Ward, “Generative adversarial networks in time series: A systematic literature review,” ACM Comput. Surv., vol. 55, no. 10, feb 2023. [Online]. Available: https://doi.org/10.1145/3559540
  35. F. A. Del Campo, M. C. G. Neri, O. O. V. Villegas, V. G. C. Sánchez, H. d. J. O. Domínguez, and V. G. Jiménez, “Auto-adaptive multilayer perceptron for univariate time series classification,” Expert Systems with Applications, vol. 181, p. 115147, 2021.
  36. B. K. Iwana, V. Frinken, and S. Uchida, “A robust dissimilarity-based neural network for temporal pattern recognition,” in 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR).   IEEE, 2016, pp. 265–270.
  37. ——, “DTW-NN: A novel neural network for time series recognition using dynamic alignment between inputs and weights,” Knowledge-Based Systems, vol. 188, p. 104971, 2020.
  38. N. Tabassum, S. Menon, and A. Jastrzebska, “Time-series classification with safe: Simple and fast segmented word embedding-based neural time series classifier,” Information Processing & Management, vol. 59, no. 5, p. 103044, 2022.
  39. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Advances neural inf. process. syst., vol. 25, pp. 1097–1105, 2012.
  40. J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai et al., “Recent advances in convolutional neural networks,” Pattern recognition, vol. 77, pp. 354–377, 2018.
  41. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, pp. 436–444, 2015.
  42. Y. Zheng, Q. Liu, E. Chen, Y. Ge, and J. L. Zhao, “Time series classification using multi-channels deep convolutional neural networks,” in International Conference on Web-Age Information Management.   Springer, 2014, pp. 298–310.
  43. J. Yang, M. N. Nguyen, P. P. San, X. L. Li, and S. Krishnaswamy, “Deep convolutional neural networks on multichannel time series for human activity recognition,” in Twenty-fourth international joint conference on artificial intelligence, 2015.
  44. B. Zhao, H. Lu, S. Chen, J. Liu, and D. Wu, “Convolutional neural networks for time series classification,” Journal of Systems Engineering and Electronics, vol. 28, no. 1, pp. 162–169, 2017.
  45. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE conf. comp. vision patt. recognit., 2015, pp. 3431–3440.
  46. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE conf. comp. vision patt. recognit., 2016, pp. 770–778.
  47. B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in IEEE conf. comp. vision patt. recognit., 2016, pp. 2921–2929.
  48. X. Zou, Z. Wang, Q. Li, and W. Sheng, “Integration of residual network and convolutional neural network along with various activation functions and global pooling for time series classification,” Neurocomputing, vol. 367, pp. 39–45, 2019.
  49. Y. Li, X. Zhang, and D. Chen, “Csrnet: Dilated convolutional neural networks for understanding the highly congested scenes,” in IEEE conf. comp. vision patt. recognit., 2018, pp. 1091–1100.
  50. O. Yazdanbakhsh and S. Dick, “Multivariate time series classification using dilated convolutional neural network,” arXiv preprint:1905.01697, 2019.
  51. S. N. M. Foumani, C. W. Tan, and M. Salehi, “Disjoint-cnn for multivariate time series classification,” in 2021 Int. Conf. Data Min. Workshops (ICDMW).   IEEE, 2021, pp. 760–769.
  52. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNet-V2: Inverted residuals and linear bottlenecks,” in IEEE conf. comp. vision patt. recognit., 2018, pp. 4510–4520.
  53. Z. Wang and T. Oates, “Encoding time series as images for visual inspection and classification using tiled convolutional neural networks,” in Workshops at the twenty-ninth AAAI conference on artificial intelligence, 2015.
  54. N. Hatami, Y. Gavet, and J. Debayle, “Classification of time-series images using deep convolutional neural networks,” in Tenth international conference on machine vision (ICMV 2017), vol. 10696.   SPIE, 2018, pp. 242–249.
  55. S. Karimi-Bidhendi, F. Munshi, and A. Munshi, “Scalable classification of univariate and multivariate time series,” in 2018 IEEE International Conference on Big Data (Big Data).   IEEE, 2018, pp. 1598–1605.
  56. Y. Zhao and Z. Cai, “Classify multivariate time series by deep neural network image classification,” in 2019 2nd China Symposium on Cognitive Computing and Hybrid Intelligence (CCHI).   IEEE, 2019, pp. 93–98.
  57. C.-L. Yang, Z.-X. Chen, and C.-Y. Yang, “Sensor classification using convolutional neural network by encoding multivariate time series as two-dimensional colored images,” Sensors, vol. 20, no. 1, p. 168, 2019.
  58. J.-P. E. S. O. Kamphorst, D. Ruelle et al., “Recurrence plots of dynamical systems,” Europhysics Letters, vol. 4, no. 9, p. 17, 1987.
  59. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in IEEE conf. comp. vision patt. recognit., 2016, pp. 2818–2826.
  60. W. Chen and K. Shi, “A deep learning framework for time series classification using relative position matrix and convolutional neural network,” Neurocomputing, vol. 359, pp. 384–394, 2019.
  61. Z. Cui, W. Chen, and Y. Chen, “Multi-scale convolutional neural networks for time series classification,” arXiv preprint:1603.06995, 2016.
  62. A. Le Guennec, S. Malinowski, and R. Tavenard, “Data augmentation for time series classification using convolutional neural networks,” in ECML/PKDD workshop on advanced analytics and learning on temporal data, 2016.
  63. C.-L. Liu, W.-H. Hsaio, and Y.-C. Tu, “Time series classification with multivariate convolutional neural network,” IEEE Transactions on Industrial Electronics, vol. 66, no. 6, pp. 4788–4797, 2018.
  64. A. Brunel, J. Pasquet, J. PASQUET, N. Rodriguez, F. Comby, D. Fouchez, and M. Chaumont, “A cnn adapted to time series for the classification of supernovae,” Electronic imaging, vol. 2019, no. 14, pp. 90–1, 2019.
  65. J. Sun, S. Takeuchi, and I. Yamasaki, “Prototypical inception network with cross branch attention for time series classification,” in 2021 International Joint Conference on Neural Networks (IJCNN).   IEEE, 2021, pp. 1–7.
  66. S. Usmankhujaev, B. Ibrokhimov, S. Baydadaev, and J. Kwon, “Time series classification with inceptionfcn,” Sensors, vol. 22, no. 1, p. 157, 2021.
  67. X. Gong, Y.-W. Si, Y. Tian, C. Lin, X. Zhang, and X. Liu, “Kdctime: Knowledge distillation with calibration on inceptiontime for time-series classification,” Inf. Sci., vol. 613, pp. 184–203, 2022.
  68. A. Ismail-Fawaz, M. Devanne, S. Berretti, J. Weber, and G. Forestier, “Lite: Light inception with boosting techniques for time series classification,” in 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA).   IEEE, 2023, pp. 1–10.
  69. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE conf. comp. vision patt. recognit., 2015, pp. 1–9.
  70. C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in Thirty-first AAAI conference on artificial intelligence, 2017.
  71. M. Ronald, A. Poulose, and D. S. Han, “isplinception: an inception-resnet deep learning architecture for human activity recognition,” IEEE Access, vol. 9, pp. 68 985–69 001, 2021.
  72. A. Ismail-Fawaz, M. Devanne, J. Weber, and G. Forestier, “Deep learning for time series classification using new hand-crafted convolution filters,” in 2022 IEEE International Conference on Big Data (Big Data).   IEEE, 2022, pp. 972–981.
  73. M. Hüsken and P. Stagge, “Recurrent neural networks for time series classification,” Neurocomputing, vol. 50, pp. 223–235, 2003.
  74. D. Dennis, D. A. E. Acar, V. Mandikal, V. S. Sadasivan, V. Saligrama, H. V. Simhadri, and P. Jain, “Shallow rnn: accurate time-series classification on resource constrained devices,” Advances neural inf. process. syst., vol. 32, 2019.
  75. S. Fernández, A. Graves, and J. Schmidhuber, “Sequence labelling in structured domains with hierarchical recurrent neural networks,” in 20th International Joint Conference on Artificial Intelligence, IJCAI 2007, 2007.
  76. M. Hermans and B. Schrauwen, “Training and analysing deep recurrent neural networks,” Advances neural inf. process. syst., vol. 26, 2013.
  77. R. Pascanu, T. Mikolov, and Y. Bengio, “On the difficulty of training recurrent neural networks,” in Int. conf. mach. learn.   PMLR, 2013, pp. 1310–1318.
  78. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  79. J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling,” arXiv preprint:1412.3555, 2014.
  80. I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” Advances neural inf. process. syst., vol. 27, 2014.
  81. J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in IEEE conf. comp. vision patt. recognit., 2015, pp. 2625–2634.
  82. A. Karpathy and L. Fei-Fei, “Deep visual-semantic alignments for generating image descriptions,” in IEEE conf. comp. vision patt. recognit., 2015, pp. 3128–3137.
  83. Y. Tang, J. Xu, K. Matsumoto, and C. Ono, “Sequence-to-sequence model with attention for time series classification,” in 2016 IEEE 16th Int. Conf. Data Min. Workshops (ICDMW).   IEEE, 2016, pp. 503–510.
  84. P. Malhotra, V. TV, L. Vig, P. Agarwal, and G. Shroff, “Timenet: Pre-trained deep recurrent neural network for time series classification,” arXiv preprint:1706.08838, 2017.
  85. F. Karim, S. Majumdar, H. Darabi, and S. Harford, “Multivariate LSTM-FCNs for time series classification,” Neural Networks, vol. 116, pp. 237–245, 2019.
  86. X. Zhang, Y. Gao, J. Lin, and C.-T. Lu, “Tapnet: Multivariate time series classification with attentional prototypical network,” in AAAI Conference on Artificial Intelligence, vol. 34, no. 04, 2020, pp. 6845–6852.
  87. J. Zuo, K. Zeitouni, and Y. Taher, “Smate: Semi-supervised spatio-temporal representation learning on multivariate time series,” in 2021 IEEE International Conference on Data Mining (ICDM).   IEEE, 2021, pp. 1565–1570.
  88. F. Karim, S. Majumdar, H. Darabi, and S. Chen, “LSTM fully convolutional networks for time series classification,” IEEE access, vol. 6, pp. 1662–1669, 2017.
  89. S. Lin and G. C. Runger, “Gcrnn: Group-constrained convolutional recurrent neural network,” IEEE transactions on neural networks and learning systems, vol. 29, no. 10, pp. 4709–4718, 2017.
  90. R. Mutegeki and D. S. Han, “A CNN-LSTM approach to human activity recognition,” in 2020 IEEE Int. Conf. Comput. Intell. Commun. Technol. (ICAIIC).   IEEE, 2020, pp. 362–366.
  91. R. Pascanu, T. Mikolov, and Y. Bengio, “Understanding the exploding gradient problem,” CoRR, abs/1211.5063, vol. 2, no. 417, p. 1, 2012.
  92. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances neural inf. process. syst., vol. 30, 2017.
  93. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in Proceedings of NAACL-HLT 2019, vol. 1.   Stroudsburg, PA, USA: Association for Computational Linguistics, 2019, pp. 4171–4186.
  94. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint:2010.11929, 2020.
  95. S. Li, X. Jin, Y. Xuan, X. Zhou, W. Chen, Y.-X. Wang, and X. Yan, “Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting,” Advances neural inf. process. syst., vol. 32, 2019.
  96. H. Zhou, S. Zhang, J. Peng, S. Zhang, J. Li, H. Xiong, and W. Zhang, “Informer: Beyond efficient transformer for long sequence time-series forecasting,” in Proceedings of AAAI, 2021.
  97. D. Kostas, S. Aroca-Ouellette, and F. Rudzicz, “Bendr: using transformers and a contrastive self-supervised learning task to learn from massive amounts of eeg data,” Frontiers in Human Neuroscience, vol. 15, 2021.
  98. Y. Yuan, G. Xun, F. Ma, Y. Wang, N. Du, K. Jia, L. Su, and A. Zhang, “Muvan: A multi-view attention network for multivariate temporal data,” in 2018 IEEE International Conference on Data Mining (ICDM).   IEEE, 2018, pp. 717–726.
  99. T.-Y. Hsieh, S. Wang, Y. Sun, and V. Honavar, “Explainable multivariate time series classification: A deep neural network which learns to attend to important variables as well as time intervals,” in 14th ACM International Conference on Web Search and Data Mining, 2021, pp. 607–615.
  100. W. Chen and K. Shi, “Multi-scale attention convolutional neural network for time series classification,” Neural Networks, vol. 136, pp. 126–140, 2021.
  101. Y. Yuan, G. Xun, F. Ma, Q. Suo, H. Xue, K. Jia, and A. Zhang, “A novel channel-aware attention framework for multi-channel eeg seizure detection via multi-view deep learning,” in 2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI).   IEEE, 2018, pp. 206–209.
  102. Y. Liang, S. Ke, J. Zhang, X. Yi, and Y. Zheng, “Geoman: Multi-level attention networks for geo-sensory time series prediction,” in IJCAI, vol. 2018, 2018, pp. 3428–3434.
  103. J. Hu and W. Zheng, “Multistage attention network for multivariate time series prediction,” Neurocomputing, vol. 383, pp. 122–137, 2020.
  104. X. Cheng, P. Han, G. Li, S. Chen, and H. Zhang, “A novel channel and temporal-wise attention in convolutional networks for multivariate time series classification,” IEEE Access, vol. 8, pp. 212 247–212 257, 2020.
  105. Z. Xiao, X. Xu, H. Xing, S. Luo, P. Dai, and D. Zhan, “Rtfn: a robust temporal feature network for time series classification,” Inf. Sci., vol. 571, pp. 65–86, 2021.
  106. J. Wang, C. Yang, X. Jiang, and J. Wu, “When: A wavelet-dtw hybrid attention network for heterogeneous time series analysis,” in Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023, pp. 2361–2373.
  107. M. Jaderberg, K. Simonyan, A. Zisserman et al., “Spatial transformer networks,” Advances neural inf. process. syst., vol. 28, 2015.
  108. S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “Cbam: Convolutional block attention module,” in European conference on computer vision, 2018, pp. 3–19.
  109. J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in IEEE conf. comp. vision patt. recognit., 2018, pp. 7132–7141.
  110. T. Wang, Z. Liu, T. Zhang, and Y. Li, “Time series classification based on multi-scale dynamic convolutional features and distance features,” in 2021 2nd Asia Symposium on Signal Processing (ASSP).   IEEE, 2021, pp. 239–246.
  111. H. Song, D. Rajan, J. Thiagarajan, and A. Spanias, “Attend and diagnose: Clinical time series analysis using attention models,” in AAAI conference on artificial intelligence, vol. 32, no. 1, 2018.
  112. C.-c. Jin and X. Chen, “An end-to-end framework combining time–frequency expert knowledge and modified transformer networks for vibration signal classification,” Expert Systems with Applications, vol. 171, p. 114570, 2021.
  113. T. Allam Jr and J. D. McEwen, “Paying attention to astronomical transients: Photometric classification with the time-series transformer,” arXiv preprint:2105.06178, 2021.
  114. M. Liu, S. Ren, S. Ma, J. Jiao, Y. Chen, Z. Wang, and W. Song, “Gated transformer networks for multivariate time series classification,” arXiv preprint:2103.14438, 2021.
  115. B. Zhao, H. Xing, X. Wang, F. Song, and Z. Xiao, “Rethinking attention mechanism in time series classification,” arXiv preprint:2207.07564, 2022.
  116. Y. Ren, L. Li, X. Yang, and J. Zhou, “Autotransformer: Automatic transformer architecture design for time series classification,” in Pacific-Asia Conference on Knowledge Discovery and Data Mining.   Springer, 2022, pp. 143–155.
  117. M. Jin, H. Y. Koh, Q. Wen, D. Zambon, C. Alippi, G. I. Webb, I. King, and S. Pan, “A Survey on Graph Neural Networks for Time Series: Forecasting, Classification, Imputation, and Anomaly Detection,” arXiv, vol. 14, no. 8, pp. 1–27, jul 2023. [Online]. Available: http://arxiv.org/abs/2307.03759
  118. Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A Comprehensive Survey on Graph Neural Networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 1, pp. 4–24, jan 2021.
  119. F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The Graph Neural Network Model,” IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 61–80, jan 2009.
  120. W. Xi, A. Jain, L. Zhang, and J. Lin, “LB-SimTSC: An Efficient Similarity-Aware Graph Neural Network for Semi-Supervised Time Series Classification,” arXiv, jan 2023.
  121. H. Liu, X. Liu, D. Yang, Z. Liang, H. Wang, Y. Cui, and J. Gu, “TodyNet: Temporal Dynamic Graph Neural Network for Multivariate Time Series Classification,” arXiv, vol. XX, no. Xx, pp. 1–10, apr 2023.
  122. S. Bloemheuvel, J. van den Hoogen, D. Jozinović, A. Michelini, and M. Atzmueller, “Graph neural networks for multivariate time series regression with application to seismic data,” International Journal of Data Science and Analytics, vol. 16, no. 3, pp. 317–332, sep 2023.
  123. Z. Cheng, Y. Yang, S. Jiang, W. Hu, Z. Ying, Z. Chai, and C. Wang, “Time2Graph+: Bridging Time Series and Graph Representation Learning via Multiple Attentions,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 2, pp. 1–1, 2021.
  124. I. C. Covert, B. Krishnan, I. Najm, J. Zhan, M. Shore, J. Hixson, and M. J. Po, “Temporal graph convolutional networks for automatic seizure detection,” in Machine Learning for Healthcare Conference.   PMLR, 2019, pp. 160–180.
  125. T. Song, W. Zheng, P. Song, and Z. Cui, “EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks,” IEEE Transactions on Affective Computing, vol. 11, no. 3, pp. 532–541, jul 2020.
  126. Z. Jia, Y. Lin, J. Wang, R. Zhou, X. Ning, Y. He, and Y. Zhao, “Graphsleepnet: Adaptive spatial-temporal graph convolutional networks for sleep stage classification.” in IJCAI, 2020, pp. 1324–1330.
  127. Z. Ma, G. Mei, E. Prezioso, Z. Zhang, and N. Xu, “A deep learning approach using graph convolutional networks for slope deformation prediction based on time-series displacement data,” Neural Computing and Applications, vol. 33, no. 21, pp. 14 441–14 457, 2021.
  128. T. Li, Z. Zhao, C. Sun, R. Yan, and X. Chen, “Multireceptive field graph convolutional networks for machine fault diagnosis,” IEEE Transactions on Industrial Electronics, vol. 68, no. 12, pp. 12 739–12 749, 2020.
  129. D. Nhu, M. Janmohamed, P. Perucca, A. Gilligan, P. Kwan, T. O’Brien, C. Tan, and L. Kuhlmann, “Graph convolutional network for generalized epileptiform abnormality detection on eeg,” in 2021 IEEE Signal Processing in Medicine and Biology Symposium (SPMB).   IEEE, 2021, pp. 1–6.
  130. S. Tang, J. A. Dunnmon, K. Saab, X. Zhang, Q. Huang, F. Dubost, D. L. Rubin, and C. Lee-Messer, “Self-Supervised Graph Neural Networks for Improved Electroencephalographic Seizure Analysis,” ICLR 2022 - 10th Int. Conf. Learning Representations, pp. 1–23, apr 2021.
  131. X. Zhang, M. Zeman, T. Tsiligkaridis, and M. Zitnik, “Graph-Guided Network for Irregularly Sampled Multivariate Time Series,” ICLR 2022 - 10th International Conference on Learning Representations, pp. 1–21, oct 2021.
  132. A. M. Censi, D. Ienco, Y. J. E. Gbodjo, R. G. Pensa, R. Interdonato, and R. Gaetano, “Attentive spatial temporal graph CNN for land cover mapping from multi temporal remote sensing data,” IEEE Access, vol. 9, pp. 23 070–23 082, 2021.
  133. T. Azevedo, A. Campbell, R. Romero-Garcia, L. Passamonti, R. A. Bethlehem, P. Liò, and N. Toschi, “A deep graph neural network architecture for modelling spatio-temporal dynamics in resting-state functional MRI data,” Medical Image Analysis, vol. 79, p. 102471, jul 2022.
  134. Z. Duan, H. Xu, Y. Wang, Y. Huang, A. Ren, Z. Xu, Y. Sun, and W. Wang, “Multivariate time-series classification with hierarchical variational graph pooling,” Neural Networks, vol. 154, pp. 481–490, oct 2022.
  135. D. Zha, K.-h. Lai, K. Zhou, and X. Hu, “Towards Similarity-Aware Time-Series Classification,” in Proceedings of the 2022 SIAM International Conference on Data Mining (SDM), Philadelphia, PA, jan 2022, pp. 199–207.
  136. L. Tulczyjew, M. Kawulok, N. Longepe, B. Le Saux, and J. Nalepa, “Graph Neural Networks Extract High-Resolution Cultivated Land Maps From Sentinel-2 Image Series,” IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022.
  137. L. Sun, C. Li, B. Liu, and Y. Zhang, “Class-driven Graph Attention Network for Multi-label Time Series Classification in Mobile Health Digital Twins,” IEEE Journal on Selected Areas in Communications, vol. 41, no. 10, pp. 3267–3278, 2023.
  138. C. Dufourg, C. Pelletier, S. May, and S. Lefèvre, “Graph Dynamic Earth Net: Spatio-Temporal Graph Benchmark for Satellite Image Time Series,” in IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium.   IEEE, jul 2023, pp. 7164–7167.
  139. E. Keogh and C. A. Ratanamahatana, “Exact indexing of dynamic time warping,” Knowl. Inform. Systems, vol. 7, no. 3, pp. 358–386, 2005.
  140. T. N. Kipf and M. Welling, “Semi-Supervised Classification with Graph Convolutional Networks,” 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings, pp. 1–14, sep 2016.
  141. L. Yang and S. Hong, “Unsupervised time-series representation learning with iterative bilinear temporal-spectral fusion,” in ICML, 2022, pp. 25 038–25 054.
  142. A. Hyvarinen and H. Morioka, “Unsupervised feature extraction by time-contrastive learning and nonlinear ica,” Advances in neural information processing systems, vol. 29, 2016.
  143. J.-Y. Franceschi, A. Dieuleveut, and M. Jaggi, “Unsupervised scalable representation learning for multivariate time series,” NeurIPS, vol. 32, 2019.
  144. S. Tonekaboni, D. Eytan, and A. Goldenberg, “Unsupervised representation learning for time series with temporal neighborhood coding,” arXiv preprint arXiv:2106.00750, 2021.
  145. K. Wickstrøm, M. Kampffmeyer, K. Ø. Mikalsen, and R. Jenssen, “Mixing up contrastive learning: Self-supervised representation learning for time series,” Pattern Recognition Letters, vol. 155, pp. 54–61, 2022.
  146. X. Yang, Z. Zhang, and R. Cui, “Timeclr: A self-supervised contrastive learning framework for univariate time series representation,” Knowledge-Based Systems, vol. 245, p. 108606, 2022.
  147. X. Zhang, Z. Zhao, T. Tsiligkaridis, and M. Zitnik, “Self-supervised contrastive pre-training for time series via time-frequency consistency,” in Proceedings of Neural Information Processing Systems, NeurIPS, 2022.
  148. Q. Meng, H. Qian, Y. Liu, L. Cui, Y. Xu, and Z. Shen, “Mhccl: masked hierarchical cluster-wise contrastive learning for multivariate time series,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 8, 2023, pp. 9153–9161.
  149. R. R. Chowdhury, X. Zhang, J. Shang, R. K. Gupta, and D. Hong, “Tarnet: Task-aware reconstruction for time-series transformer,” in 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 2022, pp. 14–18.
  150. M. Cheng, Q. Liu, Z. Liu, H. Zhang, R. Zhang, and E. Chen, “Timemae: Self-supervised representations of time series with decoupled masked autoencoders,” arXiv preprint arXiv:2303.00320, 2023.
  151. W. Zhang, L. Yang, S. Geng, and S. Hong, “Self-supervised time series representation learning via cross reconstruction transformer,” IEEE Transactions on Neural Networks and Learning Systems, 2023.
  152. A. Ismail-Fawaz, M. Devanne, S. Berretti, J. Weber, and G. Forestier, “Finding foundation models for time series classification with a pretext task,” arXiv preprint arXiv:2311.14534, 2023.
  153. C. Shorten and T. M. Khoshgoftaar, “A survey on image data augmentation for deep learning,” Journal of big data, vol. 6, no. 1, pp. 1–48, 2019.
  154. T. T. Um, F. M. Pfister, D. Pichler, S. Endo, M. Lang, S. Hirche, U. Fietzek, and D. Kulić, “Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks,” in Proc. 19th ACM int. conf. multimodal interaction, 2017, pp. 216–220.
  155. K. M. Rashid and J. Louis, “Window-warping: a time series data augmentation of imu data for construction equipment activity identification,” in ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, vol. 36.   IAARC Publications, 2019, pp. 651–657.
  156. B. K. Iwana and S. Uchida, “Time series data augmentation for neural networks by time warping with a discriminative teacher,” in 2020 25th International Conference on Pattern Recognition (ICPR).   IEEE, 2021, pp. 3558–3565.
  157. T.-S. Nguyen, S. Stueker, J. Niehues, and A. Waibel, “Improving sequence-to-sequence speech recognition training with on-the-fly data augmentation,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2020, pp. 7689–7693.
  158. B. Vachhani, C. Bhat, and S. K. Kopparapu, “Data augmentation using healthy speech for dysarthric speech recognition.” in Interspeech, 2018, pp. 471–475.
  159. J. Gao, X. Song, Q. Wen, P. Wang, L. Sun, and H. Xu, “Robusttad: Robust time series anomaly detection via decomposition and convolutional neural networks,” 2020.
  160. Z. Cui, W. Chen, and Y. Chen, “Multi-scale convolutional neural networks for time series classification,” 2016.
  161. A. Le Guennec, S. Malinowski, and R. Tavenard, “Data Augmentation for Time Series Classification using Convolutional Neural Networks,” in ECML/PKDD on Advanced Analytics and Learning on Temporal Data, 2016.
  162. G. Forestier, F. Petitjean, H. A. Dau, G. I. Webb, and E. Keogh, “Generating synthetic time series to augment sparse datasets,” in 2017 IEEE international conference on data mining (ICDM).   IEEE, 2017, pp. 865–870.
  163. H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, “Data augmentation using synthetic data for time series classification with deep residual networks,” 2018.
  164. T. Terefe, M. Devanne, J. Weber, D. Hailemariam, and G. Forestier, “Time series averaging using multi-tasking autoencoder,” in 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI).   IEEE, 2020, pp. 1065–1072.
  165. B. K. Iwana and S. Uchida, “An empirical survey of data augmentation for time series classification with neural networks,” Plos one, vol. 16, no. 7, p. e0254841, 2021.
  166. G. Pialla, M. Devanne, J. Weber, L. Idoumghar, and G. Forestier, “Data augmentation for time series classification with deep learning models,” in International Workshop on Advanced Analytics and Learning on Temporal Data.   Springer, 2022, pp. 117–132.
  167. Z. Gao, L. Li, and T. Xu, “Data augmentation for time-series classification: An extensive empirical study and comprehensive survey,” arXiv preprint arXiv:2310.10060, 2023.
  168. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in 2009 IEEE conf. comp. vision patt. recognit.   Ieee, 2009, pp. 248–255.
  169. H. I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, “Transfer learning for time series classification,” in 2018 IEEE international conference on big data (Big Data).   IEEE, 2018, pp. 1367–1376.
  170. S. Spiegel, “Transfer learning for time series classification in dissimilarity spaces,” Proceedings of AALTD, vol. 78, 2016.
  171. F. Li, K. Shirahama, M. A. Nisar, X. Huang, and M. Grzegorzek, “Deep transfer learning for time series data based on sensor modality classification,” Sensors, vol. 20, no. 15, p. 4271, 2020.
  172. Y. Rotem, N. Shimoni, L. Rokach, and B. Shapira, “Transfer learning for time series classification using synthetic data generation,” in International Symposium on Cyber Security, Cryptology, and Machine Learning.   Springer, 2022, pp. 232–246.
  173. A. Senanayaka, A. Al Mamun, G. Bond, W. Tian, H. Wang, S. Fuller, T. Falls, S. Rahimi, and L. Bian, “Similarity-based multi-source transfer learning approach for time series classification,” International Journal of Prognostics and Health Management, vol. 13, no. 2, 2022.
  174. K. Kashiparekh, J. Narwariya, P. Malhotra, L. Vig, and G. Shroff, “Convtimenet: A pre-trained deep convolutional neural network for time series classification,” in 2019 International Joint Conference on Neural Networks (IJCNN).   IEEE, 2019, pp. 1–8.
  175. D. Merlin Praveena, D. Angelin Sarah, and S. Thomas George, “Deep Learning Techniques for EEG Signal Applications–A Review,” IETE Journal of Research, vol. 68, no. 4, pp. 3030–3037, 2022.
  176. X. Liu, H. Wang, Z. Li, and L. Qin, “Deep learning in ECG diagnosis: A review,” Knowledge-Based Systems, vol. 227, p. 107187, 2021.
  177. N. Zaini, L. W. Ean, A. N. Ahmed, and M. A. Malek, “A systematic literature review of deep learning neural network for time series air quality forecasting,” Environmental Science and Pollution Research, vol. 29, no. 4, pp. 4958–4990, jan 2022.
  178. B. Zhang, Y. Rong, R. Yong, D. Qin, M. Li, G. Zou, and J. Pan, “Deep learning for air pollutant concentration prediction: A review,” Atmospheric Environment, vol. 290, p. 119347, dec 2022.
  179. G. Toh and J. Park, “Review of Vibration-Based Structural Health Monitoring Using Deep Learning,” Appl. Sci., vol. 10, no. 5, p. 1680, 2020.
  180. N. M. Thoppil, V. Vasu, and C. S. P. Rao, “Deep Learning Algorithms for Machinery Health Prognostics Using Time-Series Data: A Review,” Journal of Vibration Engineering & Technologies, vol. 9, no. 6, pp. 1123–1145, sep 2021.
  181. L. Ren, Z. Jia, Y. Laili, and D. Huang, “Deep Learning for Time-Series Prediction in IIoT: Progress, Challenges, and Prospects,” IEEE Transactions on Neural Networks and Learning Systems, vol. PP, pp. 1–20, 2023.
  182. Y. Himeur, K. Ghanem, A. Alsalemi, F. Bensaali, and A. Amira, “Artificial intelligence based anomaly detection of energy consumption in buildings: A review, current trends and new perspectives,” Applied Energy, vol. 287, p. 116601, 2021.
  183. D. Stowell, “Computational bioacoustics with deep learning: a review and roadmap,” PeerJ, vol. 10, p. e13152, mar 2022.
  184. N. Gupta, S. K. Gupta, R. K. Pathak, V. Jain, P. Rashidi, and J. S. Suri, “Human activity recognition in artificial intelligence framework: a narrative review,” Artificial Intelligence Review, vol. 55, no. 6, pp. 4755–4808, aug 2022.
  185. E. Ramanujam, T. Perumal, and S. Padmavathi, “Human activity recognition with smartphone and wearable sensors using deep learning techniques: A review,” IEEE Sensors Journal, vol. 21, no. 12, pp. 13 029–13 040, jun 2021.
  186. J. W. Lockhart, T. Pulickal, and G. M. Weiss, “Applications of mobile activity recognition,” in 2012 ACM Conference on Ubiquitous Computing - UbiComp ’12.   New York, New York, USA: ACM Press, 2012, p. 1054.
  187. E. M. Tapia, S. S. Intille, and K. Larson, “Activity recognition in the home using simple and ubiquitous sensors,” in Lecture Notes in Computer Science.   Berlin, Heidelberg: Springer, 2004, vol. 3001, pp. 158–175.
  188. Y. Kong and Y. Fu, “Human action recognition and prediction: A survey,” International Journal of Computer Vision, vol. 130, no. 5, pp. 1366–1401, may 2022.
  189. H.-B. Zhang, Y.-X. Zhang, B. Zhong, Q. Lei, L. Yang, J.-X. Du, and D.-S. Chen, “A comprehensive survey of vision-based human action recognition methods,” Sensors, vol. 19, no. 5, p. 1005, feb 2019.
  190. F. Ordóñez and D. Roggen, “Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition,” Sensors, vol. 16, no. 1, p. 115, jan 2016.
  191. A. Reiss and D. Stricker, “Introducing a new benchmarked dataset for activity monitoring,” in 16th Int. Symp. Wearable Computers, 2012, pp. 108–109.
  192. M. Zhang and A. A. Sawchuk, “USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors,” in 2012 ACM Conference on Ubiquitous Computing - UbiComp ’12.   New York, New York, USA: ACM Press, 2012, p. 1036.
  193. D. Roggen, A. Calatroni, M. Rossi, T. Holleczek, K. Förster, G. Tröster, P. Lukowicz, D. Bannach, G. Pirkl et al., “Collecting complex activity datasets in highly rich networked sensor environments,” in Seventh international conference on networked sensing systems.   IEEE, 2010, pp. 233–240.
  194. T. Sztyler, H. Stuckenschmidt, and W. Petrich, “Position-aware activity recognition with wearable devices,” Pervasive and Mobile Computing, vol. 38, pp. 281–295, jul 2017.
  195. O. D. Lara and M. A. Labrador, “A survey on human activity recognition using wearable sensors,” IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1192–1209, 2013.
  196. F. Gu, M.-H. Chung, M. Chignell, S. Valaee, B. Zhou, and X. Liu, “A survey on deep learning for human activity recognition,” ACM Computing Surveys, vol. 54, no. 8, pp. 1–34, nov 2022.
  197. N. Y. Hammerla, S. Halloran, and T. Ploetz, “Deep, convolutional, and recurrent Models for human activity recognition using wearables,” IJCAI International Joint Conference on Artificial Intelligence, vol. 2016-Janua, pp. 1533–1540, apr 2016.
  198. M. Zeng, L. T. Nguyen, B. Yu, O. J. Mengshoel, J. Zhu, P. Wu, and J. Zhang, “Convolutional neural networks for human activity recognition using mobile sensors,” in 6th International Conference on Mobile Computing, Applications and Services.   ICST, 2014, pp. 718–737.
  199. W. Jiang and Z. Yin, “Human activity recognition using wearable sensors by deep convolutional neural Networks,” in 23rd ACM international conference on Multimedia.   New York, NY, USA: ACM, oct 2015, pp. 1307–1310.
  200. J. B. Yang, M. N. Nguyen, P. P. San, X. L. Li, and S. Krishnaswamy, “Deep convolutional neural networks on multichannel time series for human activity recognition,” IJCAI International Joint Conference on Artificial Intelligence, vol. 2015-Janua, pp. 3995–4001, 2015.
  201. C. A. Ronao and S.-B. Cho, “Human activity recognition with smartphone sensors using deep learning neural networks,” Expert Systems with Applications, vol. 59, pp. 235–244, oct 2016.
  202. Y. Guan and T. Plötz, “Ensembles of deep LSTM learners for activity recognition using wearables,” ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 1, no. 2, pp. 1–28, jun 2017.
  203. S.-M. Lee, S. M. Yoon, and H. Cho, “Human activity recognition from accelerometer data using Convolutional Neural Network,” in 2017 IEEE International Conference on Big Data and Smart Computing (BigComp), vol. 83.   IEEE, feb 2017, pp. 131–134.
  204. A. Murad and J.-Y. Pyun, “Deep recurrent neural networks for human activity recognition,” Sensors, vol. 17, no. 11, p. 2556, nov 2017.
  205. A. Ignatov, “Real-time human activity recognition from accelerometer data using Convolutional Neural Networks,” Applied Soft Computing, vol. 62, pp. 915–922, jan 2018.
  206. F. Moya Rueda, R. Grzeszick, G. Fink, S. Feldhorst, and M. ten Hompel, “Convolutional neural networks for human activity recognition using body-worn sensors,” Informatics, vol. 5, no. 2, p. 26, may 2018.
  207. R. Yao, G. Lin, Q. Shi, and D. C. Ranasinghe, “Efficient dense labelling of human activity sequences from wearables using fully convolutional networks,” Pattern Recognition, vol. 78, pp. 252–266, jun 2018.
  208. M. Zeng, H. Gao, T. Yu, O. J. Mengshoel, H. Langseth, I. Lane, and X. Liu, “Understanding and improving recurrent networks for human activity recognition by continuous attention,” in ACM International Symposium on Wearable Computers, New York, NY, USA, 2018, pp. 56–63.
  209. H. Ma, W. Li, X. Zhang, S. Gao, and S. Lu, “AttnSense: Multi-level attention mechanism for multimodal human activity recognition,” in Twenty-Eighth International Joint Conference on Artificial Intelligence, California, 2019, pp. 3109–3115.
  210. C. Xu, D. Chai, J. He, X. Zhang, and S. Duan, “InnoHAR: A deep neural network for complex human activity recognition,” IEEE Access, vol. 7, pp. 9893–9902, 2019.
  211. H. Zhang, Z. Xiao, J. Wang, F. Li, and E. Szczerbicki, “A novel IoT-perceptive human activity recognition (HAR) approach using multihead convolutional attention,” IEEE Internet of Things Journal, vol. 7, no. 2, pp. 1072–1080, feb 2020.
  212. S. K. Challa, A. Kumar, and V. B. Semwal, “A multibranch CNN-BiLSTM model for human activity recognition using wearable sensor data,” The Visual Computer, no. 0123456789, aug 2021.
  213. S. Mekruksavanich and A. Jitpattanakul, “Deep Convolutional Neural Network with RNNs for complex activity recognition using wrist-worn wearable sensor data,” Electronics, vol. 10, no. 14, p. 1685, jul 2021.
  214. L. Chen, X. Liu, L. Peng, and M. Wu, “Deep learning based multimodal complex human activity recognition using wearable devices,” Applied Intelligence, vol. 51, no. 6, pp. 4029–4042, jun 2021.
  215. S. Mekruksavanich and A. Jitpattanakul, “LSTM networks using smartphone data for sensor-based human activity recognition in smart homes,” Sensors, vol. 21, no. 5, p. 1636, feb 2021.
  216. ——, “Biometric user identification based on human activity recognition using wearable sensors: An experiment using deep learning models,” Electronics, vol. 10, no. 3, p. 308, jan 2021.
  217. O. Nafea, W. Abdul, G. Muhammad, and M. Alsulaiman, “Sensor-based human activity recognition with spatio-temporal deep learning,” Sensors, vol. 21, no. 6, p. 2141, mar 2021.
  218. S. P. Singh, M. K. Sharma, A. Lay-Ekuakille, D. Gangwar, and S. Gupta, “Deep ConvLSTM with self-attention for human activity decoding using wearable sensors,” IEEE Sensors Journal, vol. 21, no. 6, pp. 8575–8582, mar 2021.
  219. X. Wang, L. Zhang, W. Huang, S. Wang, H. Wu, J. He, and A. Song, “Deep convolutional networks with tunable speed–accuracy tradeoff for human activity recognition using wearables,” IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–12, 2022.
  220. S. Xu, L. Zhang, W. Huang, H. Wu, and A. Song, “Deformable convolutional networks for multimodal human activity recognition using wearable sensors,” IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–14, 2022.
  221. J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable convolutional networks,” in 2017 IEEE Int. Conf. Computer Vision (ICCV), 2017, pp. 764–773.
  222. M. A. Wulder, J. C. White, S. N. Goward, J. G. Masek, J. R. Irons, M. Herold, W. B. Cohen, T. R. Loveland, and C. E. Woodcock, “Landsat continuity: Issues and opportunities for land cover monitoring,” Remote Sensing of Environment, vol. 112, no. 3, pp. 955–969, mar 2008.
  223. W. Emery and A. Camps, “Basic electromagnetic concepts and applications to optical sensors,” in Introduction to Satellite Remote Sensing, W. Emery and A. Camps, Eds.   Elsevier, jan 2017, ch. 2, pp. 43–83.
  224. N. Gorelick, M. Hancher, M. Dixon, S. Ilyushchenko, D. Thau, and R. Moore, “Google Earth Engine: Planetary-scale geospatial analysis for everyone,” Remote Sensing of Environment, vol. 202, pp. 18–27, dec 2017.
  225. G. Giuliani, B. Chatenoux, A. De Bono, D. Rodila, J.-P. Richard, K. Allenbach, H. Dao, and P. Peduzzi, “Building an Earth observations data cube: lessons learned from the Swiss data cube (SDC) on generating analysis ready data (ARD),” Big Earth Data, vol. 1, no. 1-2, pp. 100–117, dec 2017.
  226. A. Lewis, S. Oliver, L. Lymburner, B. Evans, L. Wyborn, N. Mueller, G. Raevksi, J. Hooke, R. Woodcock, J. Sixsmith et al., “The australian geoscience data cube—foundations and lessons learned,” Remote Sensing of Environment, vol. 202, pp. 276–292, 2017.
  227. D. Ienco, Y. J. E. Gbodjo, R. Interdonato, and R. Gaetano, “Attentive weakly supervised land cover mapping for object-based satellite image time series data with spatial interpretation,” arXiv, pp. 1–12, 2020.
  228. V. Sainte Fare Garnot, L. Landrieu, S. Giordano, and N. Chehata, “Satellite image time Series classification With Pixel-Set encoders and temporal self-attention,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).   IEEE, jun 2020, pp. 12 322–12 331.
  229. A. Kulshrestha, L. Chang, and A. Stein, “Use of LSTM for sinkhole-related anomaly detection and classification of InSAR deformation time series,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 4559–4570, 2022.
  230. Y. Ban, P. Zhang, A. Nascetti, A. R. Bevington, and M. A. Wulder, “Near real-time wildfire progression monitoring with Sentinel-1 SAR time series and deep learning,” Scientific Reports, vol. 10, no. 1, p. 1322, dec 2020.
  231. C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, M. Crucianu, and M. Datcu, “Flood detection in time series of optical and SAR images,” Int. Archives Photogrammetry, Remote Sens. & Spatial Inf. Sci., vol. XLIII-B2-2, no. B2, pp. 1343–1346, aug 2020.
  232. G. Kamdem De Teyou, Y. Tarabalka, I. Manighetti, R. Almar, and S. Tripodi, “Deep neural networks for automatic extraction of features in time series optical satellite images,” Int. Archives Photogrammetry, Remote Sens. & Spatial Inf. Sci., vol. 43, 2020.
  233. B. M. Matosak, L. M. G. Fonseca, E. C. Taquary, R. V. Maretto, H. D. N. Bendini, and M. Adami, “Mapping deforestation in Cerrado based on hybrid deep learning architecture and medium spatial resolution satellite time series,” Remote Sensing, vol. 14, no. 1, pp. 1–22, 2022.
  234. D. Ho Tong Minh, D. Ienco, R. Gaetano, N. Lalande, E. Ndikumana, F. Osman, and P. Maurel, “Deep recurrent neural networks for winter vegetation quality mapping via multitemporal SAR Sentinel-1,” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 3, pp. 464–468, mar 2018.
  235. P. Labenski, M. Ewald, S. Schmidtlein, and F. E. Fassnacht, “Classifying surface fuel types based on forest stand photographs and satellite time series using deep learning,” International Journal of Applied Earth Observation and Geoinformation, vol. 109, p. 102799, may 2022.
  236. K. Rao, A. P. Williams, J. F. Flefil, and A. G. Konings, “SAR-enhanced mapping of live fuel moisture content,” Remote Sens. Environ., vol. 245, p. 111797, 2020.
  237. L. Zhu, G. I. Webb, M. Yebra, G. Scortechini, L. Miller, and F. Petitjean, “Live fuel moisture content estimation from MODIS: A deep learning approach,” ISPRS J. Photogramm. Remote Sens., vol. 179, pp. 81–91, sep 2021.
  238. L. Miller, L. Zhu, M. Yebra, C. Rüdiger, and G. I. Webb, “Multi-modal temporal CNNs for live fuel moisture content estimation,” Environmental Modelling & Software, vol. 156, p. 105467, oct 2022.
  239. J. Xie, T. Qi, W. Hu, H. Huang, B. Chen, and J. Zhang, “Retrieval of live fuel moisture content based on multi-source remote sensing data and ensemble deep learning model,” Remote Sensing, vol. 14, no. 17, p. 4378, sep 2022.
  240. K. Lahssini, F. Teste, K. R. Dayal, S. Durrieu, D. Ienco, and J.-M. Monnet, “Combining LiDAR metrics and Sentinel-2 imagery to estimate basal area and wood volume in complex forest environment via neural networks,” IEEE J. Selected Topics Applied Earth Obs. Remote Sens., vol. 15, pp. 4337–4348, 2022.
  241. J. Sun, Z. Lai, L. Di, Z. Sun, J. Tao, and Y. Shen, “Multilevel deep learning network for county-level corn yield estimation in the U.S. Corn Belt,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 5048–5060, 2020.
  242. Z. Li, G. Chen, and T. Zhang, “Temporal attention networks for multitemporal multisensor crop classification,” IEEE Access, vol. 7, pp. 134 677–134 690, 2019.
  243. Z. Li, G. Zhou, and Q. Song, “A temporal group attention approach for multitemporal multisensor crop classification,” Infrared Physics and Technology, vol. 105, p. 103152, 2020.
  244. S. Ji, C. Zhang, A. Xu, Y. Shi, and Y. Duan, “3D convolutional neural networks for crop classification with multi-temporal remote sensing images,” Remote Sensing, vol. 10, no. 2, p. 75, jan 2018.
  245. J. Xu, Y. Zhu, R. Zhong, Z. Lin, J. Xu, H. Jiang, J. Huang, H. Li, and T. Lin, “DeepCropMapping: A multi-temporal deep learning approach with improved spatial generalizability for dynamic corn and soybean mapping,” Remote Sensing of Environment, vol. 247, p. 111946, sep 2020.
  246. V. Barriere and M. Claverie, “Multimodal crop type classification fusing multi-spectral satellite time series with farmers crop rotations and local crop distribution,” arXiv preprint:2208.10838, 2022.
  247. V. S. F. Garnot and L. Landrieu, “Lightweight temporal self-attention for classifying satellite images time series,” in Lecture Notes in Computer Science.   Springer International Publishing, 2020, vol. 12588 LNAI, pp. 171–181.
  248. S. Ofori-Ampofo, C. Pelletier, and S. Lang, “Crop type mapping from optical and radar time series using attention-based deep learning,” Remote Sensing, vol. 13, no. 22, p. 4668, nov 2021.
  249. Y. Yuan and L. Lin, “Self-Supervised pretraining of transformers for satellite image time series classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 474–487, 2021.
  250. N. Di Mauro, A. Vergari, T. M. A. Basile, F. G. Ventola, and F. Esposito, “End-to-end learning of deep spatio-temporal representations for satellite image time series classification.” in DC@ PKDD/ECML, 2017.
  251. N. Kussul, M. Lavreniuk, S. Skakun, and A. Shelestov, “Deep learning classification of land cover and crop types using remote sensing data,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 778–782, may 2017.
  252. C. Pelletier, G. Webb, and F. Petitjean, “Temporal convolutional neural network for the classification of satellite image time series,” Remote Sensing, vol. 11, no. 5, p. 523, mar 2019.
  253. P. Dou, H. Shen, Z. Li, and X. Guan, “Time series remote sensing image classification framework using combination of deep learning and multiple classifiers system,” International Journal of Applied Earth Observation and Geoinformation, vol. 103, p. 102477, 2021.
  254. D. Ienco, R. Interdonato, R. Gaetano, and D. Ho Tong Minh, “Combining Sentinel-1 and Sentinel-2 satellite image time series for land cover mapping via a multi-source deep learning architecture,” ISPRS J. Photogramm. Remote Sens., vol. 158, pp. 11–22, 2019.
  255. R. Interdonato, D. Ienco, R. Gaetano, and K. Ose, “DuPLO: A DUal view Point deep Learning architecture for time series classificatiOn,” ISPRS J. Photogramm. Remote Sens., vol. 149, pp. 91–104, mar 2019.
  256. M. Rußwurm and M. Körner, “Multi-Temporal land cover classification with sequential recurrent encoders,” ISPRS International Journal of Geo-Information, vol. 7, no. 4, p. 129, mar 2018.
  257. A. Stoian, V. Poulain, J. Inglada, V. Poughon, and D. Derksen, “Land cover maps production with high resolution satellite image time series and convolutional neural networks: Adaptations and limits for operational systems,” Remote Sensing, vol. 11, no. 17, pp. 1–26, 2019.
  258. D. Ienco, R. Gaetano, C. Dupaquier, and P. Maurel, “Land cover classification via multitemporal spatial data by deep recurrent neural networks,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 10, pp. 1685–1689, oct 2017.
  259. Y. J. E. Gbodjo, D. Ienco, L. Leroux, R. Interdonato, R. Gaetano, and B. Ndao, “Object-based multi-temporal and multi-source land cover mapping leveraging hierarchical class relationships,” Remote Sensing, vol. 12, no. 17, p. 2814, aug 2020.
  260. D. Ienco, R. Gaetano, R. Interdonato, K. Ose, and D. Ho Tong Minh, “Combining Sentinel-1 and Sentinel-2 time series via RNN for object-based land cover classification,” in IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium.   IEEE, jul 2019, pp. 4881–4884.
  261. Y. Yuan, L. Lin, Q. Liu, R. Hang, and Z.-G. Zhou, “SITS-Former: A pre-trained spatio-spectral-temporal representation model for Sentinel-2 time series classification,” International Journal of Applied Earth Observation and Geoinformation, vol. 106, p. 102651, feb 2022.
  262. M. Qiao, X. He, X. Cheng, P. Li, H. Luo, L. Zhang, and Z. Tian, “Crop yield prediction from multi-spectral, multi-temporal remotely sensed imagery using recurrent 3d convolutional neural networks,” International Journal of Applied Earth Observation and Geoinformation, vol. 102, p. 102436, 2021.
  263. M. Rußwurm and M. Körner, “Self-attention for raw optical satellite time series classification,” ISPRS J. Photogramm. Remote Sens., vol. 169, pp. 421–435, 2020.
  264. D. Tuia, C. Persello, and L. Bruzzone, “Domain adaptation for the classification of remote sensing data: An overview of recent advances,” IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 41–57, 2016.
  265. V. S. F. Garnot, L. Landrieu, S. Giordano, and N. Chehata, “Time-space tradeoff in deep learning models for crop classification on satellite multi-spectral image time series,” in IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium.   IEEE, 2019, pp. 6247–6250.
  266. H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, “Deep neural network ensembles for time series classification,” in 2019 International Joint Conference on Neural Networks (IJCNN), vol. 2019-July.   IEEE, jul 2019, pp. 1–6.
  267. D. H. Wolpert, “Stacked generalization,” Neural Networks, vol. 5, no. 2, pp. 241–259, jan 1992.
  268. Y. Freund and R. E. Schapire, “Experiments with a new boosting algorithm,” in 13th International Conference on Machine Learning, 1996, pp. 148–156.
  269. C. Gómez, J. C. White, and M. A. Wulder, “Optical remotely sensed time series data for land cover classification: A review,” ISPRS J. Photogramm. Remote Sens., vol. 116, pp. 55–72, 2016.
  270. X. X. Zhu, D. Tuia, L. Mou, G. S. Xia, L. Zhang, F. Xu, and F. Fraundorfer, “Deep learning in remote sensing: A comprehensive review and list of resources,” IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 4, pp. 8–36, 2017.
  271. L. Ma, Y. Liu, X. Zhang, Y. Ye, G. Yin, and B. A. Johnson, “Deep learning in remote sensing applications: A meta-analysis and review,” ISPRS J. Photogramm. Remote Sens., vol. 152, pp. 166–177, jun 2019.
  272. Q. Yuan, H. Shen, T. Li, Z. Li, S. Li, Y. Jiang, H. Xu, W. Tan, Q. Yang, J. Wang, J. Gao, and L. Zhang, “Deep learning in environmental remote sensing: Achievements and challenges,” Remote Sensing of Environment, vol. 241, p. 111716, may 2020.
  273. M. E. D. Chaves, M. C. A. Picoli, and I. D. Sanches, “Recent applications of Landsat 8/OLI and Sentinel-2/MSI for land use and land cover mapping: A systematic review,” Remote Sensing, vol. 12, no. 18, p. 3062, sep 2020.
  274. W. R. Moskolaï, W. Abdou, A. Dipanda, and Kolyang, “Application of deep learning architectures for satellite image time series prediction: A review,” Remote Sensing, vol. 13, no. 23, p. 4822, nov 2021.
  275. J. Lines and A. Bagnall, “Time series classification with ensembles of elastic distance measures,” Data Min. Knowl. Discov., vol. 29, no. 3, pp. 565–592, 2015.
  276. C. W. Tan, F. Petitjean, and G. I. Webb, “FastEE: Fast Ensembles of Elastic Distances for time series classification,” Data Min. Knowl. Discov., vol. 34, no. 1, pp. 231–272, 2020.
  277. M. Herrmann and G. I. Webb, “Amercing: An intuitive, elegant and effective constraint for dynamic time warping,” arXiv preprint:2111.13314, 2021.
  278. A. Bagnall, M. Flynn, J. Large, J. Lines, and M. Middlehurst, “On the usage and performance of the Hierarchical Vote Collective of Transformation-based Ensembles version 1.0 (HIVE-COTE v1.0),” in International Workshop on Advanced Analytics and Learning on Temporal Data, 2020, pp. 3–18.
  279. M. Middlehurst, J. Large, M. Flynn, J. Lines, A. Bostrom, and A. Bagnall, “HIVE-COTE 2.0: a new meta ensemble for time series classification,” Machine Learning, vol. 110, no. 11, pp. 3211–3243, 2021.
  280. A. Bagnall, J. Lines, J. Hills, and A. Bostrom, “Time-series classification with COTE: the collective of transformation-based ensembles,” IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 9, pp. 2522–2535, 2015.
  281. J. Lines, S. Taylor, and A. Bagnall, “Time series classification with HIVE-COTE: The hierarchical vote collective of transformation-based ensembles,” ACM Transactions on Knowledge Discovery from Data, vol. 12, no. 5, 2018.
  282. ——, “HIVE-COTE: The hierarchical vote collective of transformation-based ensembles for time series classification,” in 2016 IEEE 16th International Conference on Data Mining (ICDM).   IEEE, 2016, pp. 1041–1046.
  283. R. J. Kate, “Using dynamic time warping distances as features for improved time series classification,” Data Min. Knowl. Discov., vol. 30, no. 2, pp. 283–312, 2016.
  284. A. Bostrom and A. Bagnall, “Binary shapelet transform for multiclass time series classification,” in International Conference on Big Data Analytics and Knowledge Discovery.   Springer, 2015, pp. 257–269.
  285. P. Schäfer, “The BOSS is concerned with time series classification in the presence of noise,” Data Min. Knowl. Discov., vol. 29, no. 6, pp. 1505–1530, 2015.
  286. J. Hills, J. Lines, E. Baranauskas, J. Mapp, and A. Bagnall, “Classification of time series by shapelet transformation,” Data Min. Knowl. Discov., vol. 28, no. 4, pp. 851–881, 2014.
  287. H. Deng, G. Runger, E. Tuv, and M. Vladimir, “A time series forest for classification and feature extraction,” Inf. Sci., vol. 239, pp. 142–153, 2013.
  288. M. G. Baydogan, G. Runger, and E. Tuv, “A bag-of-features framework to classify time series,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2796–2802, 2013.
  289. A. Dempster, D. F. Schmidt, and G. I. Webb, “MiniRocket: A very fast (almost) deterministic transform for time series classification,” in 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021, pp. 248–257.
  290. C. W. Tan, A. Dempster, C. Bergmeir, and G. I. Webb, “MultiRocket: multiple pooling operators and transformations for fast and effective time series classification,” Data Min. Knowl. Discov., jun 2022.
  291. A. Dempster, D. F. Schmidt, and G. I. Webb, “Hydra: Competing convolutional kernels for fast and accurate time series classification,” Data Min. Knowl. Discov., pp. 1–27, 2023.
  292. B. Lucas, A. Shifaz, C. Pelletier, L. O’Neill, N. Zaidi, B. Goethals, F. Petitjean, and G. I. Webb, “Proximity forest: an effective and scalable distance-based classifier for time series,” Data Min. Knowl. Discov., vol. 33, no. 3, pp. 607–635, 2019.
  293. M. Herrmann, C. W. Tan, M. Salehi, and G. I. Webb, “Proximity forest 2.0: A new effective and scalable similarity-based classifier for time series,” arXiv preprint arXiv:2304.05800, 2023.
  294. K. Fukushima and S. Miyake, “Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition,” in Competition and cooperation in neural nets.   Springer, 1982, pp. 267–285.
  295. D. H. Hubel and T. N. Wiesel, “Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex,” The Journal of physiology, vol. 160, no. 1, p. 106, 1962.
  296. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  297. V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in ICML, 2010.
  298. S. El Hihi and Y. Bengio, “Hierarchical recurrent neural networks for long-term dependencies,” Advances in Neural Information Processing Systems, vol. 8, 1995.
  299. R. Pascanu, C. Gulcehre, K. Cho, and Y. Bengio, “How to construct deep recurrent neural networks,” arXiv preprint:1312.6026, 2013.
  300. A. Graves, “Supervised sequence labelling with recurrent neural networks,” Ph.D. dissertation, Technical University of Munich, 2008.
  301. D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint:1409.0473, 2014.
  302. K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint:1406.1078, 2014.
  303. M.-T. Luong, H. Pham, and C. D. Manning, “Effective approaches to attention-based neural machine translation,” arXiv preprint:1508.04025, 2015.
  304. J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, “Spectral networks and locally connected networks on graphs,” arXiv preprint:1312.6203, 2013.
  305. D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 83–98, 2013. [Online]. Available: http://ieeexplore.ieee.org/document/6494675/
  306. A. Longa, V. Lachi, G. Santin, M. Bianchini, B. Lepri, P. Lio, F. Scarselli, and A. Passerini, “Graph Neural Networks for temporal graphs: State of the art, open challenges, and opportunities,” arXiv preprint:2302.01018, 2023.
  307. M. Bachlin, D. Roggen, G. Troster, M. Plotnik, N. Inbar, I. Meidan, T. Herman, M. Brozgol, E. Shaviv, N. Giladi, and J. M. Hausdorff, “Potentials of enhanced context awareness in wearable assistants for Parkinson’s Disease patients with the freezing of gait syndrome,” in 2009 International Symposium on Wearable Computers.   IEEE, sep 2009, pp. 123–130.
  308. D. Micucci, M. Mobilio, and P. Napoletano, “UniMiB SHAR: A dataset for human activity recognition using acceleration data from smartphones,” Applied Sciences, vol. 7, no. 10, p. 1101, oct 2017.
  309. P. Zappi, C. Lombriser, T. Stiefmeier, E. Farella, D. Roggen, L. Benini, and G. Tröster, “Activity recognition from on-body sensors: accuracy-power trade-off by dynamic sensor selection,” in European Conference on Wireless Sensor Networks.   Springer, 2008, pp. 17–33.
  310. R. Chavarriaga, H. Sagha, A. Calatroni, S. T. Digumarti, G. Tröster, J. D. R. Millán, and D. Roggen, “The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition,” Pattern Recognition Letters, vol. 34, no. 15, pp. 2033–2042, nov 2013.
  311. A. Reiss and D. Stricker, “Creating and benchmarking a new dataset for physical activity monitoring,” in 5th Int. Conf. PErvasive Technologies Related to Assistive Environments - PETRA ’12.   New York, New York, USA: ACM Press, 2012, p. 1.
  312. D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, “A public domain dataset for human activity recognition using smartphones,” in 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN, Bruges, Belgium, 2013, pp. 437–442.
  313. J. R. Kwapisz, G. M. Weiss, and S. A. Moore, “Activity recognition using cell phone accelerometers,” ACM SIGKDD Explorations Newsletter, vol. 12, no. 2, pp. 74–82, mar 2011.
  314. U.S. Geological Survey, “Landsat Satellite Missions.” [Online]. Available: https://www.usgs.gov/landsat-missions/landsat-satellite-missions
  315. NASA, “MODIS Moderate Resolution Imaging Spectroradiometer.” [Online]. Available: https://modis.gsfc.nasa.gov/
  316. European Space Agency, “Sentinel Online,” 2019. [Online]. Available: https://sentinel.esa.int/web/sentinel/home
  317. ——, “Pleiades - Earth Online.” [Online]. Available: https://earth.esa.int/eogateway/missions/pleiades
  318. National Space Organization, “FORMOSAT-2,” 2020. [Online]. Available: https://www.nspo.narl.org.tw/history_prog.php?c=20030402&ln=en
  319. EoPortal, “Gaofen-1,” 2014. [Online]. Available: https://www.eoportal.org/satellite-missions/gaofen-1
  320. ——, “Gaofen-2,” 2015. [Online]. Available: https://www.eoportal.org/satellite-missions/gaofen-2