TSFool: Crafting Highly-Imperceptible Adversarial Time Series through Multi-Objective Attack (2209.06388v4)
Abstract: Recent years have witnessed the success of recurrent neural network (RNN) models in time series classification (TSC). However, neural networks (NNs) are vulnerable to adversarial samples, enabling real-life adversarial attacks that undermine the robustness of AI models. To date, most existing attacks target feed-forward NNs and image recognition tasks, and they perform poorly on RNN-based TSC. This is due to the cyclical computation of RNNs, which prevents direct model differentiation, and to the high visual sensitivity of time series to perturbations, which challenges the local objective optimization of adversarial samples. In this paper, we propose an efficient method called TSFool to craft highly imperceptible adversarial time series for RNN-based TSC. The core idea is a new global optimization objective, the "Camouflage Coefficient," which captures the imperceptibility of an adversarial sample with respect to the class distribution. Building on it, we reduce the adversarial attack to a multi-objective optimization problem that improves perturbation quality. Furthermore, to speed up optimization, we propose using a representation model of the RNN to capture deeply embedded vulnerable samples whose features deviate from the latent manifold. Experiments on 11 UCR and UEA datasets show that TSFool significantly outperforms six white-box and three black-box benchmark attacks in effectiveness, efficiency, and imperceptibility, as evaluated by standard metrics, human studies, and real-world defenses.
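The abstract does not give the formal definition of the Camouflage Coefficient or of the multi-objective reduction, but the stated intuition is that imperceptibility should be judged globally, against the class distribution, rather than only by local perturbation size. The Python sketch below illustrates one plausible instantiation of that idea under stated assumptions: the distance-ratio form of the coefficient, the use of per-class mean vectors (`class_means`), and the scalarization weight `alpha` are all illustrative choices, not the paper's actual formulas.

```python
import numpy as np

def camouflage_coefficient(x_adv, class_means, orig_label, adv_label):
    """Illustrative stand-in for the paper's Camouflage Coefficient.

    Ratio of the adversarial sample's distance to its original class
    center versus its distance to the (misclassified) target class
    center. Values below 1 suggest the sample still "camouflages"
    within its original class distribution. The exact definition in
    the paper may differ; this is an assumption for illustration.
    """
    d_orig = np.linalg.norm(x_adv - class_means[orig_label])
    d_adv = np.linalg.norm(x_adv - class_means[adv_label])
    return d_orig / (d_adv + 1e-12)

def multi_objective_score(x, x_adv, class_means, orig_label, adv_label,
                          alpha=0.5):
    """Scalarized two-objective attack score (smaller is better).

    Combines perturbation size (local imperceptibility) with the
    camouflage coefficient (global imperceptibility w.r.t. the class
    distribution); alpha is a hypothetical trade-off weight.
    """
    perturbation = np.linalg.norm(x_adv - x)
    cc = camouflage_coefficient(x_adv, class_means, orig_label, adv_label)
    return alpha * perturbation + (1.0 - alpha) * cc
```

A weighted scalarization is only one way to handle the two objectives; a multi-objective optimizer could instead maintain a Pareto front over perturbation size and camouflage, which is closer in spirit to the "multi-objective optimization problem" the abstract describes.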