BOSC: A Backdoor-based Framework for Open Set Synthetic Image Attribution (2405.11491v1)

Published 19 May 2024 in cs.CV

Abstract: Synthetic image attribution addresses the problem of tracing back the origin of images produced by generative models. Extensive efforts have been made to explore unique representations of generative models and use them to attribute a synthetic image to the model that produced it. Most methods classify the model or architecture within a closed set, without considering the possibility that the system is fed samples produced by unknown architectures. With the continuous progress of AI technology, new generative architectures keep appearing, drawing researchers' attention to tools capable of working in open-set scenarios. In this paper, we propose a framework for open-set attribution of synthetic images, named BOSC (Backdoor-based Open Set Classification), that relies on the concept of backdoor attacks to design a classifier with a rejection option. BOSC works by purposely injecting class-specific triggers into a portion of the images in the training set, inducing the network to establish a match between class features and trigger features. The behavior of the trained model on triggered samples is then exploited at test time to perform sample rejection using an ad-hoc score. Experiments show that the proposed method performs well, consistently surpassing the state of the art, and is also highly robust to image processing operations. Although we designed our method for the task of synthetic image attribution, the proposed framework is general and can be used for other image forensic applications.
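The core mechanism described above, stamping a class-specific trigger into training images and then using the model's reaction to triggers as a rejection score, can be illustrated with a minimal sketch. This is not the paper's implementation; the trigger pattern, patch placement, and score formula below are hypothetical stand-ins for the ideas the abstract describes.

```python
import numpy as np

def add_trigger(image, class_id, patch_size=4):
    """Stamp a class-specific trigger patch into a corner of the image.
    The trigger is deterministic per class (seeded by class_id); the
    pattern and placement here are illustrative, not the paper's."""
    rng = np.random.default_rng(class_id)
    trigger = rng.uniform(0.0, 1.0, size=(patch_size, patch_size, image.shape[2]))
    out = image.copy()
    out[:patch_size, :patch_size, :] = trigger
    return out

def rejection_score(logits_clean, logits_triggered_per_class):
    """Toy rejection score: for a sample from a known class, stamping
    that class's own trigger should boost the corresponding logit; for
    an unknown-architecture sample, no trigger produces a consistent
    boost, so the maximum per-class boost stays small and the sample
    can be rejected when the score falls below a threshold."""
    boosts = [lt[c] - logits_clean[c]
              for c, lt in enumerate(logits_triggered_per_class)]
    return max(boosts)
```

In this sketch, training would mix clean images with triggered copies labeled by their class, so the network learns the class/trigger association; at test time, an image whose score stays below a calibrated threshold is rejected as coming from an unknown generator.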

Authors (3)
  1. Jun Wang (990 papers)
  2. Benedetta Tondi (43 papers)
  3. Mauro Barni (56 papers)
Citations (1)