Adversarial Attacks and Defenses in Fault Detection and Diagnosis: A Comprehensive Benchmark on the Tennessee Eastman Process
Abstract: Integrating machine learning into Automated Control Systems (ACS) enhances decision-making in industrial process management. One barrier to the widespread adoption of these technologies in industry is the vulnerability of neural networks to adversarial attacks. This study examines the threats posed by deploying deep learning models for fault diagnosis in ACS, using the Tennessee Eastman Process dataset. We evaluate three neural networks with different architectures, subject them to six types of adversarial attacks, and explore five defense methods. Our results highlight the models' strong vulnerability to adversarial samples and the varying effectiveness of the defense strategies. We also propose a novel protection approach that combines multiple defense methods and demonstrate its efficacy. This research contributes several insights into securing machine learning within ACS, supporting robust fault diagnosis in industrial processes.
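To make the threat model concrete, the sketch below crafts an adversarial sample with the fast gradient sign method (FGSM), a canonical evasion attack of the kind benchmarked here: a fault-diagnosis classifier is pushed toward misclassification by perturbing its input in the direction of the loss gradient. This is a minimal illustration assuming a PyTorch classifier over windows of process sensor readings; the function name, `epsilon`, and tensor shapes are our assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.05) -> torch.Tensor:
    """Return x_adv = x + epsilon * sign(grad_x loss), the FGSM perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # y: true fault-class labels
    loss.backward()
    # One signed gradient step per input value; a small epsilon keeps the
    # perturbed trajectory close to the original process measurements.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

Comparing model accuracy on `fgsm_attack(model, x, y)` against clean `x` exposes the kind of robustness gap such a benchmark measures; stronger attacks and the defense methods follow the same evaluate-under-perturbation pattern.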