AdaFSNet: Time Series Classification Based on Convolutional Network with an Adaptive and Effective Kernel Size Configuration (2404.18246v1)
Abstract: Time series classification is one of the most critical and challenging problems in data mining, arising in many fields and holding significant research importance. Despite extensive research, notable achievements, and successful real-world applications, capturing an appropriate receptive field (RF) size from one-dimensional or multi-dimensional time series of varying lengths remains a persistent issue: it strongly affects performance, and the best size varies considerably across datasets. In this paper, we propose an Adaptive and Effective Full-Scope Convolutional Neural Network (AdaFSNet) to improve time series classification accuracy. The network comprises two Dense Blocks and can dynamically choose a range of kernel sizes that effectively encompasses the optimal RF size for a given dataset by incorporating multiple prime numbers matched to the time series length. We also design a TargetDrop block, which reduces redundancy while extracting a more effective RF. To assess the effectiveness of AdaFSNet, we conducted comprehensive experiments on the UCR and UEA archives, which contain one-dimensional and multi-dimensional time series data, respectively. Our model surpassed baseline models in classification accuracy, underscoring AdaFSNet's efficiency and effectiveness in handling time series classification tasks.
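The two core ideas in the abstract can be illustrated with a minimal sketch. The prime-based kernel selection follows the Omni-Scale CNN idea cited below (Tang et al.): because every even number is a sum of two primes, stacking layers whose kernel sizes are 1, 2, and the primes up to some fraction of the series length lets the composed receptive fields cover all useful RF sizes. The function names, the `max_ratio` cutoff, and the saliency-based `target_drop` mask below are illustrative assumptions, not the paper's actual implementation.

```python
def prime_kernel_sizes(series_length, max_ratio=0.25):
    """Sketch of prime-based kernel selection: return 1, 2, and every prime
    up to a fraction of the input length, so sums of stacked kernel sizes
    can reach any receptive-field size up to the series length.
    (max_ratio is an assumed hyperparameter, not from the paper.)"""
    limit = max(2, int(series_length * max_ratio))
    sizes = [1, 2]
    for n in range(3, limit + 1):
        # trial division is enough for the small limits used here
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            sizes.append(n)
    return sizes


def target_drop(x, scores, drop_ratio=0.15):
    """Sketch of a TargetDrop-style mask (after Zhu & Zhao, cited below):
    zero out the most salient fraction of channels so the network cannot
    over-rely on a few dominant filters.
    x: list of channel rows, scores: per-channel saliency values."""
    k = max(1, int(len(scores) * drop_ratio))
    # indices of the k highest-saliency channels
    drop = set(sorted(range(len(scores)), key=lambda i: scores[i])[-k:])
    return [[0.0] * len(row) if i in drop else list(row)
            for i, row in enumerate(x)]
```

For a series of length 100 with the assumed 0.25 cutoff, `prime_kernel_sizes` yields kernel sizes up to 25; `target_drop` here deterministically masks the top-saliency channels, whereas the actual TargetDrop applies the mask stochastically during training.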
- Q. Yang and X. Wu, “10 challenging problems in data mining research,” International Journal of Information Technology & Decision Making, vol. 5, no. 04, pp. 597–604, 2006.
- P. Esling and C. Agon, “Time-series data mining,” ACM Comput. Surv., vol. 45, no. 1, Dec. 2012. [Online]. Available: https://doi.org/10.1145/2379776.2379788
- B. Hu, Y. Chen, and E. Keogh, “Classification of streaming time series under more realistic assumptions,” Data mining and knowledge discovery, vol. 30, no. 2, pp. 403–437, 2016.
- S. Lai, L. Hu, J. Wang, L. Berti-Equille, and D. Wang, “Faithful vision-language interpretation via concept bottleneck models,” in The Twelfth International Conference on Learning Representations, 2024.
- A. Bagnall, J. Lines, A. Bostrom, J. Large, and E. Keogh, “The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances,” Data mining and knowledge discovery, vol. 31, pp. 606–660, 2017.
- M. Rußwurm and M. Körner, “Self-attention for raw optical satellite time series classification,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 169, pp. 421–435, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0924271620301647
- X. Xie, H. Liu, M. Shu, Q. Zhu, A. Huang, X. Kong, and Y. Wang, “A multi-stage denoising framework for ambulatory ECG signal based on domain knowledge and motion artifact detection,” Future Generation Computer Systems, vol. 116, pp. 103–116, 2021.
- S. Lai, X. Hu, H. Xu, Z. Ren, and Z. Liu, “Multimodal sentiment analysis: A survey,” Displays, p. 102563, 2023.
- J. Hills, J. Lines, E. Baranauskas, J. Mapp, and A. Bagnall, “Classification of time series by shapelet transformation,” Data mining and knowledge discovery, vol. 28, pp. 851–881, 2014.
- P. Schäfer, “The BOSS is concerned with time series classification in the presence of noise,” Data Mining and Knowledge Discovery, vol. 29, pp. 1505–1530, 2015.
- T. Górecki and M. Łuczak, “Using derivatives in time series classification,” Data Mining and Knowledge Discovery, vol. 26, pp. 310–331, 2013.
- S. Zhou, W. Shen, D. Zeng, M. Fang, Y. Wei, and Z. Zhang, “Spatial–temporal convolutional neural networks for anomaly detection and localization in crowded scenes,” Signal Processing: Image Communication, vol. 47, pp. 358–368, 2016. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0923596516300935
- N. Razavian and D. Sontag, “Temporal convolutional neural networks for diagnosis from lab tests,” arXiv preprint arXiv:1511.07938, 2015.
- W. Tang, G. Long, L. Liu, T. Zhou, M. Blumenstein, and J. Jiang, “Omni-scale CNNs: a simple and effective kernel size configuration for time series classification,” arXiv preprint arXiv:2002.10061, 2020.
- A. Bagnall, H. A. Dau, J. Lines, M. Flynn, J. Large, A. Bostrom, P. Southam, and E. Keogh, “The UEA multivariate time series classification archive, 2018,” arXiv preprint arXiv:1811.00075, 2018.
- A. N. Gomez, I. Zhang, S. R. Kamalakara, D. Madaan, K. Swersky, Y. Gal, and G. E. Hinton, “Learning sparse networks using targeted dropout,” arXiv preprint arXiv:1905.13678, 2019.
- Z. Ouyang, Y. Feng, Z. He, T. Hao, T. Dai, and S.-T. Xia, “Attentiondrop for convolutional neural networks,” in 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019, pp. 1342–1347.
- B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, “Learning transferable architectures for scalable image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 8697–8710.
- C. Louizos, M. Welling, and D. P. Kingma, “Learning sparse neural networks through l_0 regularization,” arXiv preprint arXiv:1712.01312, 2017.
- H. Wang, Q. Zhang, Y. Wang, and H. Hu, “Structured probabilistic pruning for convolutional neural network acceleration,” arXiv preprint arXiv:1709.06994, 2017.
- H. Zhu and X. Zhao, “Targetdrop: A targeted regularization method for convolutional neural networks,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 3283–3287.
- S. Han, Z. Meng, Z. Li, J. O’Reilly, J. Cai, X. Wang, and Y. Tong, “Optimizing filter size in convolutional neural networks for facial action unit recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 5070–5078.
- S. Wang, S. Suo, W.-C. Ma, A. Pokrovsky, and R. Urtasun, “Deep parametric continuous convolutional neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 2589–2597.
- H. Ismail Fawaz, B. Lucas, G. Forestier, C. Pelletier, D. F. Schmidt, J. Weber, G. I. Webb, L. Idoumghar, P.-A. Muller, and F. Petitjean, “InceptionTime: Finding AlexNet for time series classification,” Data Mining and Knowledge Discovery, vol. 34, no. 6, pp. 1936–1962, 2020.
- C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in Proceedings of the AAAI conference on artificial intelligence, vol. 31, no. 1, 2017.
- J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132–7141.
- G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708.
- Z. Wang, W. Yan, and T. Oates, “Time series classification from scratch with deep neural networks: A strong baseline,” in 2017 International joint conference on neural networks (IJCNN). IEEE, 2017, pp. 1578–1585.
- D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
- F. Karim, S. Majumdar, H. Darabi, and S. Chen, “LSTM fully convolutional networks for time series classification,” IEEE Access, vol. 6, pp. 1662–1669, 2018.
- Z. Cui, W. Chen, and Y. Chen, “Multi-scale convolutional neural networks for time series classification,” arXiv preprint arXiv:1603.06995, 2016.
- A. Dempster, F. Petitjean, and G. I. Webb, “ROCKET: exceptionally fast and accurate time series classification using random convolutional kernels,” Data Mining and Knowledge Discovery, vol. 34, no. 5, pp. 1454–1495, 2020.
- F. Karim, S. Majumdar, H. Darabi, and S. Harford, “Multivariate LSTM-FCNs for time series classification,” Neural networks, vol. 116, pp. 237–245, 2019.
- H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller, “Deep learning for time series classification: a review,” Data mining and knowledge discovery, vol. 33, no. 4, pp. 917–963, 2019.