
ManiFPT: Defining and Analyzing Fingerprints of Generative Models (2402.10401v2)

Published 16 Feb 2024 in cs.LG and cs.CV

Abstract: Recent works have shown that generative models leave traces of their underlying generative process on the generated samples, broadly referred to as fingerprints of a generative model, and have studied their utility in detecting synthetic images from real ones. However, the extent to which these fingerprints can distinguish between various types of synthetic images and help identify the underlying generative process remains under-explored. In particular, the very definition of a fingerprint remains unclear, to our knowledge. To that end, in this work, we formalize the definitions of artifact and fingerprint in generative models, propose an algorithm for computing them in practice, and finally study its effectiveness in distinguishing a large array of different generative models. We find that using our proposed definition can significantly improve performance on the task of identifying the underlying generative process from samples (model attribution) compared to existing methods. Additionally, we study the structure of the fingerprints and observe that it is very predictive of the effect of different design choices on the generative process.
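
To make the artifact/fingerprint idea from the abstract concrete, here is a minimal, hedged sketch. It assumes (this is an illustration, not the paper's implementation) that the real-data manifold is approximated by nearest-neighbor lookup over real samples, that an artifact is the residual between a generated sample and its nearest real sample, and that model attribution is a simple classifier over those artifacts. The helper name `compute_artifacts` and the toy data are hypothetical.

```python
# Toy sketch of artifacts/fingerprints for model attribution (assumptions noted above).
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

def compute_artifacts(generated, real, n_neighbors=1):
    """Artifact of each generated sample: residual to its nearest real sample,
    used here as a crude stand-in for projection onto the data manifold."""
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(real)
    _, idx = nn.kneighbors(generated)
    projections = real[idx[:, 0]]
    return generated - projections  # artifacts e(x) = x - x*

# Synthetic data: "real" samples and samples from two hypothetical generative models,
# each with its own small systematic bias.
rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 64))
gen_a = rng.normal(size=(200, 64)) + 0.1 * rng.normal(size=(1, 64))
gen_b = rng.normal(size=(200, 64)) + 0.1 * rng.normal(size=(1, 64))

arts_a = compute_artifacts(gen_a, real)
arts_b = compute_artifacts(gen_b, real)

# Model attribution: predict which model produced a sample from its artifact alone.
X = np.vstack([arts_a, arts_b])
y = np.array([0] * len(arts_a) + [1] * len(arts_b))
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("attribution accuracy on training artifacts:", clf.score(X, y))
```

In this sketch, the collection of artifacts produced by one model plays the role of that model's fingerprint; any structure shared across its artifacts is what the attribution classifier can exploit.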
