Data Augmentation for Time-Series Classification: An Extensive Empirical Study and Comprehensive Survey (2310.10060v5)
Abstract: Data Augmentation (DA) has become a critical approach in Time Series Classification (TSC), primarily for its capacity to expand training datasets, enhance model robustness, introduce diversity, and reduce overfitting. However, the current landscape of DA in TSC is plagued by fragmented literature reviews, nebulous methodological taxonomies, inadequate evaluative measures, and a dearth of accessible, user-oriented tools. This study addresses these challenges through a comprehensive examination of DA methodologies within the TSC domain. Our research began with an extensive literature review spanning a decade, which revealed significant gaps in existing surveys and motivated a detailed analysis of over 100 scholarly articles, identifying more than 60 distinct DA techniques. This rigorous review led to a novel taxonomy tailored to the specific needs of DA in TSC, organizing techniques into five primary categories: Transformation-Based, Pattern-Based, Generative, Decomposition-Based, and Automated Data Augmentation. The taxonomy is intended to help researchers select appropriate methods with greater clarity. In response to the lack of comprehensive evaluations of foundational DA techniques, we conducted a thorough empirical study, testing nearly 20 DA strategies across 15 diverse datasets representing all data types in the UCR time-series repository. Using ResNet and LSTM architectures, we employed a multifaceted evaluation approach, including Accuracy, Method Ranking, and Residual Analysis, yielding benchmark accuracies of 84.98 ± 16.41% with ResNet and 82.41 ± 18.71% with LSTM. Our investigation underscored the inconsistent efficacy of DA techniques: for instance, methods such as RGWs and Random Permutation significantly improved model performance, whereas others, such as EMD (empirical mode decomposition), were less effective.
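To make the Transformation-Based category concrete, below is a minimal NumPy sketch of two of the simpler augmentations named in the abstract, jittering (additive noise) and random permutation of segments. The function names, noise scale, and segment count are illustrative assumptions, not the exact settings used in the paper's experiments.

```python
# Illustrative sketches of two transformation-based augmentations (jittering
# and random permutation). Hyperparameters here are assumptions for the demo,
# not the configuration evaluated in the survey's benchmark.
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add i.i.d. Gaussian noise to a univariate series x of shape (T,)."""
    return x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)

def random_permutation(x: np.ndarray, n_segments: int = 4) -> np.ndarray:
    """Split x into n_segments contiguous chunks and shuffle their order."""
    segments = np.array_split(x, n_segments)
    order = np.random.permutation(len(segments))
    return np.concatenate([segments[i] for i in order])

if __name__ == "__main__":
    series = np.sin(np.linspace(0, 4 * np.pi, 128))
    augmented = random_permutation(jitter(series), n_segments=4)
    print(series.shape, augmented.shape)  # (128,) (128,)
```

Both operations preserve the series length and label, which is why they are commonly applied on the fly during training to enlarge the effective dataset.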