
AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration (2403.06430v2)

Published 11 Mar 2024 in cs.CV

Abstract: Deep learning-based face restoration models, increasingly prevalent in smart devices, have become targets for sophisticated backdoor attacks. These attacks, through subtle trigger injection into input face images, can lead to unexpected restoration outcomes. Unlike conventional methods focused on classification tasks, our approach introduces a unique degradation objective tailored for attacking restoration models. Moreover, we propose the Adaptive Selective Frequency Injection Backdoor Attack (AS-FIBA) framework, which employs a neural network to generate input-specific triggers in the frequency domain, seamlessly blending them with benign images. This yields imperceptible yet effective attacks that steer restoration predictions towards subtly degraded outputs rather than conspicuous targets. Extensive experiments demonstrate the efficacy of the degradation objective on state-of-the-art face restoration models. Notably, AS-FIBA inserts effective backdoors that are more imperceptible than those of existing backdoor attack methods, including WaNet, ISSBA, and FIBA.
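The frequency-injection idea the abstract builds on can be sketched as follows. This is a minimal, NumPy-only illustration of the fixed FIBA-style baseline: the trigger's low-frequency amplitude spectrum is blended into the benign image's spectrum while the benign phase is kept, so the poisoned image stays visually close to the original. AS-FIBA replaces this fixed blending rule with a learned, input-specific trigger generator, which is not reproduced here; the function name `frequency_inject` and the `alpha` (blend strength) and `beta` (band size) parameters are illustrative choices, not the paper's notation.

```python
import numpy as np

def frequency_inject(benign, trigger, alpha=0.15, beta=0.1):
    """Blend the trigger's low-frequency amplitude into the benign
    image's spectrum, keeping the benign phase (FIBA-style sketch)."""
    fft_b = np.fft.fft2(benign, axes=(0, 1))
    fft_t = np.fft.fft2(trigger, axes=(0, 1))
    amp_b, pha_b = np.abs(fft_b), np.angle(fft_b)
    amp_t = np.abs(fft_t)

    # Center the low frequencies so the band is a square around the middle.
    amp_b = np.fft.fftshift(amp_b, axes=(0, 1))
    amp_t = np.fft.fftshift(amp_t, axes=(0, 1))
    h, w = benign.shape[:2]
    bh, bw = int(beta * h), int(beta * w)
    ch, cw = h // 2, w // 2

    # Convex blend of the two amplitude spectra inside the band only.
    band = (slice(ch - bh, ch + bh), slice(cw - bw, cw + bw))
    amp_b[band] = (1 - alpha) * amp_b[band] + alpha * amp_t[band]

    # Undo the shift and recombine with the untouched benign phase.
    amp_b = np.fft.ifftshift(amp_b, axes=(0, 1))
    poisoned = np.fft.ifft2(amp_b * np.exp(1j * pha_b), axes=(0, 1))
    return np.real(poisoned)
```

With `alpha = 0` the image is reconstructed unchanged; small `alpha` and `beta` keep the perturbation confined to low frequencies, which is what makes triggers of this family hard to see.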

References (49)
  1. T. Yang, P. Ren, X. Xie, and L. Zhang, “Gan prior embedded network for blind face restoration in the wild,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 672–681.
  2. F. Zhu, J. Zhu, W. Chu, X. Zhang, X. Ji, C. Wang, and Y. Tai, “Blind face restoration via integrating face shape and generative priors,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7662–7671.
  3. Y. Zhao, Y.-C. Su, C.-T. Chu, Y. Li, M. Renn, Y. Zhu, C. Chen, and X. Jia, “Rethinking deep face restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7652–7661.
  4. Z. Wang, J. Zhang, R. Chen, W. Wang, and P. Luo, “Restoreformer: High-quality blind face restoration from undegraded key-value pairs,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 17512–17521.
  5. H. Wang, Z. Teng, C. Wu, and S. Coleman, “Facial Landmarks and Generative Priors Guided Blind Face Restoration,” in 2022 IEEE 20th International Conference on Industrial Informatics (INDIN). IEEE, 2022, pp. 101–106.
  6. P. Zhang, K. Zhang, W. Luo, C. Li, and G. Wang, “Blind face restoration: Benchmark datasets and a baseline model,” Neurocomputing, vol. 574, p. 127271, 2024.
  7. X. Chen, J. Tan, T. Wang, K. Zhang, W. Luo, and X. Cao, “Towards real-world blind face restoration with generative diffusion prior,” arXiv preprint arXiv:2312.15736, 2023.
  8. J. Tan, X. Chen, T. Wang, K. Zhang, W. Luo, and X. Cao, “Blind face restoration for under-display camera via dictionary guided transformer,” IEEE Transactions on Circuits and Systems for Video Technology, 2023.
  9. X. Tu, J. Zhao, Q. Liu, W. Ai, G. Guo, Z. Li, W. Liu, and J. Feng, “Joint face image restoration and frontalization for recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 3, pp. 1285–1298, 2021.
  10. Y. Hu, Y. Wang, and J. Zhang, “Dear-gan: Degradation-aware face restoration with gan prior,” IEEE Transactions on Circuits and Systems for Video Technology, 2023.
  11. T. Wang, K. Zhang, X. Chen, W. Luo, J. Deng, T. Lu, X. Cao, W. Liu, H. Li, and S. Zafeiriou, “A survey of deep face restoration: Denoise, super-resolution, deblur, artifact removal,” arXiv preprint arXiv:2211.02831, 2022.
  12. K. Zhang, D. Li, W. Luo, J. Liu, J. Deng, W. Liu, and S. Zafeiriou, “EDFace-Celeb-1M: Benchmarking face hallucination with a million-scale dataset,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 3968–3978, 2022.
  13. T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4401–4410.
  14. K. Ali, A. N. Quershi, A. A. B. Arifin, M. S. Bhatti, A. Sohail, and R. Hassan, “Deep image restoration model: A defense method against adversarial attacks.” Computers, Materials & Continua, vol. 71, no. 2, 2022.
  15. C. Kang, Y. Dong, Z. Wang, S. Ruan, H. Su, and X. Wei, “Diffender: Diffusion-based adversarial defense against patch attacks in the physical world,” arXiv preprint arXiv:2306.09124, 2023.
  16. A. ArjomandBigdeli, M. Amirmazlaghani, and M. Khalooei, “Defense against adversarial attacks using dragan,” in 2020 6th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS). IEEE, 2020, pp. 1–5.
  17. Y. Yao, H. Li, H. Zheng, and B. Y. Zhao, “Latent backdoor attacks on deep neural networks,” in Proceedings of the 2019 ACM SIGSAC conference on computer and communications security, 2019, pp. 2041–2055.
  18. T. Wang, Y. Yao, F. Xu, S. An, H. Tong, and T. Wang, “An invisible black-box backdoor attack through frequency domain,” in European Conference on Computer Vision. Springer, 2022, pp. 396–413.
  19. T. Gu, B. Dolan-Gavitt, and S. Garg, “Badnets: Identifying vulnerabilities in the machine learning model supply chain,” arXiv preprint arXiv:1708.06733, 2017.
  20. A. Nguyen and A. Tran, “Wanet–imperceptible warping-based backdoor attack,” arXiv preprint arXiv:2102.10369, 2021.
  21. Y. Feng, B. Ma, J. Zhang, S. Zhao, Y. Xia, and D. Tao, “Fiba: Frequency-injection based backdoor attack in medical image analysis,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 20876–20885.
  22. M. Barni, K. Kallas, and B. Tondi, “A new backdoor attack in cnns by training set corruption without label poisoning,” in 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019, pp. 101–105.
  23. Y. Liu, X. Ma, J. Bailey, and F. Lu, “Reflection backdoor: A natural backdoor attack on deep neural networks,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X 16. Springer, 2020, pp. 182–199.
  24. Y. Li, Y. Li, B. Wu, L. Li, R. He, and S. Lyu, “Invisible backdoor attack with sample-specific triggers,” in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 16463–16472.
  25. T. Wang, Y. Yao, F. Xu, S. An, H. Tong, and T. Wang, “Backdoor attack through frequency domain,” arXiv preprint arXiv:2111.10991, 2021.
  26. K. Liu, B. Dolan-Gavitt, and S. Garg, “Fine-pruning: Defending against backdooring attacks on deep neural networks,” in International symposium on research in attacks, intrusions, and defenses. Springer, 2018, pp. 273–294.
  27. K. Xu, S. Liu, P.-Y. Chen, P. Zhao, and X. Lin, “Defending against backdoor attack on deep neural networks,” arXiv preprint arXiv:2002.12162, 2020.
  28. B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao, “Neural cleanse: Identifying and mitigating backdoor attacks in neural networks,” in 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019, pp. 707–723.
  29. E. Chou, F. Tramer, and G. Pellegrino, “Sentinet: Detecting localized universal attacks against deep learning systems,” in 2020 IEEE Security and Privacy Workshops (SPW). IEEE, 2020, pp. 48–54.
  30. R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.
  31. Y. Gao, C. Xu, D. Wang, S. Chen, D. C. Ranasinghe, and S. Nepal, “Strip: A defence against trojan attacks on deep neural networks,” in Proceedings of the 35th Annual Computer Security Applications Conference, 2019, pp. 113–125.
  32. L. Chen, J. Pan, R. Hu, Z. Han, C. Liang, and Y. Wu, “Modeling and optimizing of the multi-layer nearest neighbor network for face image super-resolution,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 12, pp. 4513–4525, 2019.
  33. B. Jiang, Y. Lu, B. Zhang, and G. Lu, “Few-shot learning for image denoising,” IEEE Transactions on Circuits and Systems for Video Technology, 2023.
  34. K. Zhang, T. Wang, W. Luo, W. Ren, B. Stenger, W. Liu, H. Li, and M.-H. Yang, “Mc-blur: A comprehensive benchmark for image deblurring,” IEEE Transactions on Circuits and Systems for Video Technology, 2023.
  35. L. Yang, S. Wang, S. Ma, W. Gao, C. Liu, P. Wang, and P. Ren, “Hifacegan: Face renovation via collaborative suppression and replenishment,” in Proceedings of the 28th ACM international conference on multimedia, 2020, pp. 1551–1560.
  36. X. Wang, Y. Li, H. Zhang, and Y. Shan, “Towards real-world blind face restoration with generative facial prior,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 9168–9178.
  37. S. Zhou, K. Chan, C. Li, and C. C. Loy, “Towards robust blind face restoration with codebook lookup transformer,” Advances in Neural Information Processing Systems, vol. 35, pp. 30599–30611, 2022.
  38. Y. Gu, X. Wang, L. Xie, C. Dong, G. Li, Y. Shan, and M.-M. Cheng, “Vqfr: Blind face restoration with vector-quantized dictionary and parallel decoder,” in European Conference on Computer Vision. Springer, 2022, pp. 126–143.
  39. Y. Cui, Y. Tao, Z. Bing, W. Ren, X. Gao, X. Cao, K. Huang, and A. Knoll, “Selective frequency network for image restoration,” in The Eleventh International Conference on Learning Representations, 2022.
  40. J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132–7141.
  41. K. Doan, Y. Lao, W. Zhao, and P. Li, “Lira: Learnable, imperceptible and robust backdoor attacks,” in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 11966–11976.
  42. T. Morkel, J. H. Eloff, and M. S. Olivier, “An overview of image steganography.” in ISSA, vol. 1, no. 2, 2005, pp. 1–11.
  43. N. Subramanian, O. Elharrouss, S. Al-Maadeed, and A. Bouridane, “Image steganography: A review of the recent advances,” IEEE Access, vol. 9, pp. 23409–23423, 2021.
  44. S.-P. Lu, R. Wang, T. Zhong, and P. L. Rosin, “Large-capacity image steganography based on invertible neural networks,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 10816–10825.
  45. V. Kumar, S. Sharma, C. Kumar, and A. K. Sahu, “Latest trends in deep learning techniques for image steganography,” International Journal of Digital Crime and Forensics (IJDCF), vol. 15, no. 1, pp. 1–14, 2023.
  46. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in 2009 IEEE conference on computer vision and pattern recognition. IEEE, 2009, pp. 248–255.
  47. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “completely blind” image quality analyzer,” IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2012.
  48. F.-Z. Ou, X. Chen, R. Zhang, Y. Huang, S. Li, J. Li, Y. Li, L. Cao, and Y.-G. Wang, “Sdd-fiqa: unsupervised face image quality assessment with similarity distribution distance,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 7670–7679.
  49. K. Liu, B. Dolan-Gavitt, and S. Garg, “Fine-pruning: Defending against backdooring attacks on deep neural networks,” CoRR, vol. abs/1805.12185, 2018. [Online]. Available: http://arxiv.org/abs/1805.12185
