
FakeTracer: Catching Face-swap DeepFakes via Implanting Traces in Training (2307.14593v2)

Published 27 Jul 2023 in cs.CV

Abstract: Face-swap DeepFake is an emerging AI-based face forgery technique that can replace the original face in a video with a generated face of a target identity while preserving facial attributes such as expression and orientation. Because faces are highly privacy-sensitive, misuse of this technique raises serious social concerns, and defending against DeepFakes has recently drawn considerable attention. In this paper, we describe a new proactive defense method called FakeTracer that exposes face-swap DeepFakes by implanting traces during training. Compared to general face-synthesis DeepFakes, the face-swap DeepFake is more complex: it involves an identity change, passes through an encoding-decoding process, and is trained without supervision, all of which make it harder to implant traces in the training phase. To defend effectively against face-swap DeepFakes, we design two types of traces, a sustainable trace (STrace) and an erasable trace (ETrace), and add them to training faces. During training, these traced faces influence what the face-swap DeepFake model learns, so that the faces it generates carry only the sustainable trace. By checking for these two traces, our method can effectively expose DeepFakes. Extensive experiments corroborate the efficacy of our method in defending against face-swap DeepFakes.
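
The abstract describes the core idea only at a conceptual level: training faces carry both a sustainable trace (STrace) and an erasable trace (ETrace), while faces produced by a face-swap model trained on them retain only the sustainable one, so a detector can flag a face as a DeepFake when STrace is present and ETrace is absent. The sketch below illustrates just that decision logic, not the paper's actual trace construction: the additive random templates, the embedding strength ALPHA, the correlation-based trace_present check, and its threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the detection logic implied by the abstract, using
# additive pixel-space patterns as stand-in traces. How FakeTracer actually
# designs STrace to survive the encoder-decoder and ETrace to be erased
# during training is not reproduced here; everything below is a toy model.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64                               # toy face resolution

# Fixed (secret) zero-mean trace templates (illustrative assumption).
STRACE = rng.standard_normal((H, W))     # sustainable trace template
ETRACE = rng.standard_normal((H, W))     # erasable trace template
ALPHA = 0.05                             # embedding strength (assumed)

def embed_traces(face: np.ndarray) -> np.ndarray:
    """Add both traces to a training face (the proactive defense step)."""
    return face + ALPHA * (STRACE + ETRACE)

def trace_present(face: np.ndarray, template: np.ndarray, thr: float = 0.05) -> bool:
    """Detect a trace via normalized correlation with its template."""
    f = face - face.mean()
    t = template - template.mean()
    corr = float((f * t).sum() / (np.linalg.norm(f) * np.linalg.norm(t) + 1e-8))
    return corr > thr

def is_deepfake(face: np.ndarray) -> bool:
    """Per the abstract: a swapped face keeps STrace but loses ETrace."""
    return trace_present(face, STRACE) and not trace_present(face, ETRACE)

# Toy demo: a traced training face vs. a simulated DeepFake output that,
# by assumption, retains only the sustainable trace after training.
clean = rng.uniform(0.0, 1.0, size=(H, W))
traced_training_face = embed_traces(clean)
simulated_deepfake = clean + ALPHA * STRACE   # ETrace erased by training

print(is_deepfake(traced_training_face))  # False: both traces present
print(is_deepfake(simulated_deepfake))    # True: only STrace survives
```

In the actual method, the sustainable trace must be designed so that it survives the face-swap model's encoding-decoding pipeline while the erasable trace does not; the toy simulation above simply assumes that outcome rather than modeling the training process.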

