
Preserving Fairness Generalization in Deepfake Detection (2402.17229v1)

Published 27 Feb 2024 in cs.CV, cs.LG, and cs.CY

Abstract: Although effective deepfake detection models have been developed in recent years, studies have revealed that these models can exhibit unfair performance disparities across demographic groups such as race and gender. Such disparities can cause particular groups to face unfair targeting or exclusion from detection, potentially allowing misclassified deepfakes to manipulate public opinion and undermine trust in the model. The existing approach to this problem provides a fair loss function; it shows good fairness performance in intra-domain evaluation but does not maintain fairness in cross-domain testing. This highlights the significance of fairness generalization in the fight against deepfakes. In this work, we propose the first method to address the fairness generalization problem in deepfake detection by simultaneously considering features, loss, and optimization. Our method employs disentanglement learning to extract demographic and domain-agnostic forgery features, fusing them to encourage fair learning across a flattened loss landscape. Extensive experiments on prominent deepfake datasets demonstrate our method's effectiveness, surpassing state-of-the-art approaches in preserving fairness during cross-domain deepfake detection. The code is available at https://github.com/Purdue-M2/Fairness-Generalization
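The abstract's "flattened loss landscape" refers to sharpness-aware style optimization: each update first perturbs the weights toward the locally worst-case direction, then descends using the gradient at that perturbed point, which biases training toward flat minima that tend to generalize (and here, to preserve fairness) across domains. Below is a minimal, framework-free sketch of one such update on a toy quadratic loss; the function names and hyperparameters (`rho`, `lr`) are illustrative assumptions, not the authors' implementation.

```python
def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware update: perturb weights toward the worst-case
    direction (scaled by rho), then descend using the gradient there."""
    g = grad_fn(w)
    norm = sum(x * x for x in g) ** 0.5 or 1.0  # guard against zero gradient
    # Ascent step: move to the (approximate) sharpest nearby point.
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]
    # Descent step: apply the gradient evaluated at the perturbed weights.
    g_adv = grad_fn(w_adv)
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]

def grad_quadratic(w):
    # Gradient of the toy loss L(w) = sum(w_i^2), standing in for the
    # fairness-aware training loss used in the paper.
    return [2.0 * wi for wi in w]

w = [1.0, -2.0]
for _ in range(200):
    w = sam_step(w, grad_quadratic)
print(w)  # converges to a small neighborhood of the flat minimum at 0
```

In the paper's setting the same two-step update would be applied to the full detector, with `grad_fn` computing gradients of the fairness-regularized detection loss over the fused demographic and forgery features.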

Authors (6)
  1. Li Lin (91 papers)
  2. Xinan He (6 papers)
  3. Yan Ju (10 papers)
  4. Xin Wang (1307 papers)
  5. Feng Ding (72 papers)
  6. Shu Hu (63 papers)
Citations (23)
