LESSON: Multi-Label Adversarial False Data Injection Attack for Deep Learning Locational Detection (2401.16001v1)
Abstract: Deep learning methods can not only detect false data injection attacks (FDIA) but also locate the positions they target. Although adversarial false data injection attacks (AFDIA) that exploit deep learning vulnerabilities have been studied for single-label FDIA detection, adversarial attacks and defenses against multi-label FDIA locational detection remain unexplored. To bridge this gap, this paper is the first to explore multi-label adversarial example attacks against multi-label FDIA locational detectors and proposes a general multi-label adversarial attack framework, namely the muLti-labEl adverSarial falSe data injectiON attack (LESSON). The LESSON framework comprises three key designs, namely Perturbing State Variables, Tailored Loss Function Design, and Change of Variables, which together find suitable multi-label adversarial perturbations within the physical constraints that circumvent both Bad Data Detection (BDD) and Neural Attack Location (NAL). Four typical LESSON attacks, derived from the framework and two dimensions of attack objectives, are examined, and the experimental results demonstrate the effectiveness of the proposed attack framework, raising serious and pressing security concerns for smart grids.
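To make the three designs concrete, the following PyTorch-style sketch illustrates one plausible way they could fit together: perturbations are applied to the state variables (so the resulting measurement injection stays consistent with the estimation model and evades residual-based BDD), a tailored multi-label loss drives the NAL detector's per-location outputs toward chosen target labels, and a tanh change of variables keeps the perturbation inside a magnitude bound. This is a minimal sketch under assumptions, not the paper's exact formulation: the names `nal_model`, `H`, `z`, `target_labels`, and `eps`, the DC measurement model, and the binary cross-entropy loss are all illustrative choices, and `nal_model` is assumed to output per-position probabilities in [0, 1].

```python
import torch

def lesson_sketch(nal_model, z, H, target_labels, eps=0.1, steps=200, lr=0.01):
    """Hedged sketch of a LESSON-style multi-label adversarial optimization.

    Assumptions: `nal_model` maps a measurement vector to per-location attack
    probabilities, `H` is the DC measurement Jacobian, `z` is the (already
    falsified) measurement vector, and `target_labels` is the desired
    multi-label output (e.g. all zeros to hide the attack locations).
    """
    n_states = H.shape[1]

    # Change of variables: optimize an unconstrained w and map it through tanh
    # so the state perturbation delta_c always stays within [-eps, eps].
    w = torch.zeros(n_states, requires_grad=True)
    optimizer = torch.optim.Adam([w], lr=lr)
    bce = torch.nn.BCELoss()

    for _ in range(steps):
        delta_c = eps * torch.tanh(w)   # bounded perturbation of state variables
        a = H @ delta_c                 # measurement injection consistent with the model (BDD-stealthy)
        y_pred = nal_model(z + a)       # multi-label locational detector output

        # Tailored loss: push every location output toward the target labels.
        loss = bce(y_pred, target_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Return the final stealthy measurement injection.
    return (H @ (eps * torch.tanh(w))).detach()
```

The tanh reparameterization is borrowed from standard box-constrained adversarial attacks; swapping the loss or the target-label pattern would correspond to different attack objectives along the two dimensions the abstract mentions.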