Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection (2402.11473v1)

Published 18 Feb 2024 in cs.CV

Abstract: The proliferation of face forgery techniques has raised significant concerns within society, thereby motivating the development of face forgery detection methods. These methods aim to distinguish forged faces from genuine ones and have proven effective in practical applications. However, this paper introduces a novel and previously unrecognized threat to face forgery detection: backdoor attacks. By embedding backdoors into models and incorporating specific trigger patterns into the input, attackers can deceive detectors into producing erroneous predictions for forged faces. To achieve this goal, this paper proposes the \emph{Poisoned Forgery Face} framework, which enables clean-label backdoor attacks on face forgery detectors. Our approach involves constructing a scalable trigger generator and utilizing a novel convolving process to generate translation-sensitive trigger patterns. Moreover, we employ a relative embedding method based on landmark-based regions to enhance the stealthiness of the poisoned samples. Consequently, detectors trained on our poisoned samples are embedded with backdoors. Notably, our approach surpasses SoTA backdoor baselines with a significant improvement in attack success rate (+16.39\% BD-AUC) and a reduction in visibility (-12.65\% $L_\infty$). Furthermore, our attack exhibits promising performance against backdoor defenses. We anticipate that this paper will draw greater attention to the potential threats posed by backdoor attacks in face forgery detection scenarios. Our codes will be made available at \url{https://github.com/JWLiang007/PFF}
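The abstract describes two ingredients: a convolving process that yields translation-sensitive trigger patterns, and a relative (amplitude-scaled) embedding of the trigger inside landmark-derived face regions. The sketch below is a hypothetical illustration of that general idea, not the authors' implementation (which lives at the repository above): `make_trigger`, `poison`, the sign-alternating kernel, and the `alpha` scaling are all assumptions chosen to make the mechanics concrete.

```python
import numpy as np

def make_trigger(shape, kernel_size=3, seed=0):
    """Hypothetical translation-sensitive trigger: convolve random noise
    with a sign-alternating kernel so a one-pixel shift of the pattern
    flips local correlations (a stand-in for the paper's convolving step)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    kernel = np.ones((kernel_size, kernel_size))
    kernel[::2, 1::2] = -1  # checkerboard of alternating signs
    kernel[1::2, ::2] = -1
    pad = kernel_size // 2
    padded = np.pad(noise, pad, mode="edge")
    h, w = shape
    out = np.zeros(shape)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(
                padded[i:i + kernel_size, j:j + kernel_size] * kernel
            )
    return np.tanh(out)  # bound the pattern to [-1, 1]

def poison(image, trigger, landmark_box, alpha=0.05):
    """Blend the trigger only inside a landmark-derived box, scaled
    relative to the local intensity so the poisoned sample stays subtle."""
    x0, y0, x1, y1 = landmark_box
    poisoned = image.astype(float).copy()
    region = poisoned[y0:y1, x0:x1]
    t = trigger[: y1 - y0, : x1 - x0]
    poisoned[y0:y1, x0:x1] = np.clip(
        region + alpha * region.max() * t, 0, 255
    )
    return poisoned
```

For example, poisoning a flat 32x32 grayscale image inside the box `(8, 8, 24, 24)` with `alpha=0.05` perturbs pixels by at most 5% of the region's peak intensity and leaves everything outside the box untouched; the clean-label attack would pair such samples with their original labels in the training set.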

Authors (6)
  1. Jiawei Liang
  2. Siyuan Liang
  3. Aishan Liu
  4. Xiaojun Jia
  5. Junhao Kuang
  6. Xiaochun Cao
Citations (15)