
SATBA: An Invisible Backdoor Attack Based On Spatial Attention (2302.13056v3)

Published 25 Feb 2023 in cs.CR and cs.CV

Abstract: Backdoor attacks have emerged as a novel and concerning threat to AI security. These attacks involve training a deep neural network (DNN) on datasets that contain hidden trigger patterns. Although the poisoned model behaves normally on benign samples, it exhibits abnormal behavior on samples containing the trigger pattern. However, most existing backdoor attacks suffer from two significant drawbacks: their trigger patterns are visible and easy to detect by backdoor defenses or even human inspection, and their injection process causes the loss of natural sample features and trigger patterns, thereby reducing both the attack success rate and model accuracy. In this paper, we propose a novel backdoor attack named SATBA that overcomes these limitations using spatial attention and a U-Net-based model. The attack process begins by using spatial attention to extract meaningful data features and generate trigger patterns associated with clean images. Then, a U-shaped model is used to embed these trigger patterns into the original data without causing noticeable feature loss. We evaluate our attack on three prominent image classification DNNs across three standard datasets. The results demonstrate that SATBA achieves a high attack success rate while maintaining robustness against backdoor defenses. Furthermore, we conduct extensive image similarity experiments to emphasize the stealthiness of our attack strategy. Overall, SATBA presents a promising approach to backdoor attacks, addressing the shortcomings of previous methods and showcasing its effectiveness in evading detection and maintaining a high attack success rate.
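The poisoning pipeline the abstract describes (derive a sample-specific trigger from an attention map over the clean image, embed it invisibly, and relabel the poisoned samples to the target class) can be sketched in pure Python. This is a hypothetical toy illustration, not the authors' implementation: the per-pixel "attention" map, the additive stand-in for the U-shaped embedding network, and all function names are assumptions for exposition.

```python
# Toy sketch of a SATBA-style data-poisoning pipeline. The "attention"
# computation and the additive embedding are illustrative stand-ins for
# the paper's spatial-attention module and U-shaped embedding network.

def spatial_attention(image):
    """Toy spatial-attention map: per-pixel magnitude normalised to [0, 1].
    `image` is a 2-D list of grayscale values in [0, 1]."""
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0  # avoid division by zero on flat images
    return [[(v - lo) / scale for v in row] for row in image]

def make_trigger(image, epsilon=0.05):
    """Sample-specific trigger: the attention map scaled to a small
    amplitude, so the perturbation follows salient image regions."""
    return [[epsilon * a for a in row] for row in spatial_attention(image)]

def embed_trigger(image, trigger):
    """Stand-in for the U-shaped embedding model: additive blend,
    clipped to the valid pixel range."""
    return [[min(1.0, max(0.0, v + t))
             for v, t in zip(row, trow)]
            for row, trow in zip(image, trigger)]

def poison(dataset, target_label, rate=0.1, epsilon=0.05):
    """Embed triggers into a fraction `rate` of samples and relabel
    them to `target_label`; leave the rest untouched."""
    n_poison = int(rate * len(dataset))
    out = []
    for i, (img, label) in enumerate(dataset):
        if i < n_poison:
            out.append((embed_trigger(img, make_trigger(img, epsilon)),
                        target_label))
        else:
            out.append((img, label))
    return out
```

A model trained on the output of `poison` would, under this scheme, learn to associate the low-amplitude attention-shaped perturbation with the target class while classifying clean images normally; the small `epsilon` is what keeps the trigger visually inconspicuous.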

Authors (4)
  1. Huasong Zhou
  2. Xiaowei Xu
  3. Xiaodong Wang
  4. Leon Bevan Bullock