Real is not True: Backdoor Attacks Against Deepfake Detection (2403.06610v1)

Published 11 Mar 2024 in cs.CR

Abstract: The proliferation of malicious deepfake applications has raised substantial public concern and cast doubt on the integrity of digital media. Although capable deepfake detection mechanisms have been developed, they remain persistently vulnerable to a range of attacks. Notably, existing attacks are predominantly adversarial example attacks, which operate at test time. In this work, we introduce Bad-Deepfake, a novel backdoor attack against deepfake detectors. Our approach manipulates a small subset of the training data to gain disproportionate influence over the behavior of the trained model. Exploiting inherent weaknesses of deepfake detectors, we construct triggers and select the most effective samples for the poisoned set. By combining these techniques, we achieve a 100% attack success rate (ASR) against widely used deepfake detectors.
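
The abstract describes the attack only at a high level. Below is a minimal, hypothetical sketch of the BadNets-style dirty-label poisoning it builds on, assuming a binary real/fake image classifier, NumPy arrays in HWC layout with values in [0, 1], and a random subset in place of the paper's effectiveness-based sample selection and engineered triggers. All names (add_trigger, poison_dataset, poison_rate) are illustrative, not from the paper.

```python
# Sketch of BadNets-style training-set poisoning against a binary real/fake
# classifier. Names and parameters are illustrative assumptions, not the
# paper's actual trigger design or sample-selection strategy.
import numpy as np

REAL, FAKE = 0, 1  # assumed label convention

def add_trigger(image: np.ndarray, size: int = 4) -> np.ndarray:
    """Stamp a small white square into the bottom-right corner (a toy trigger)."""
    patched = image.copy()
    patched[-size:, -size:, :] = 1.0  # images assumed HWC, values in [0, 1]
    return patched

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   poison_rate: float = 0.01, seed: int = 0):
    """Stamp the trigger on a small subset of fake images and relabel them as
    real. A random subset stands in for the paper's selection of the most
    effective samples."""
    rng = np.random.default_rng(seed)
    fake_idx = np.flatnonzero(labels == FAKE)
    n_poison = min(len(fake_idx), max(1, int(poison_rate * len(labels))))
    chosen = rng.choice(fake_idx, size=n_poison, replace=False)

    images, labels = images.copy(), labels.copy()
    for i in chosen:
        images[i] = add_trigger(images[i])
        labels[i] = REAL  # target label: triggered fakes should be called "real"
    return images, labels, chosen

# After training on the poisoned set, the attack success rate (ASR) is the
# fraction of triggered fake test images the detector classifies as real.
```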

