Detecting AI-Generated Images via CLIP (2404.08788v1)

Published 12 Apr 2024 in cs.CV and cs.LG

Abstract: As AI-generated image (AIGI) methods become more powerful and accessible, it has become a critical task to determine if an image is real or AI-generated. Because AIGI lack the signatures of photographs and have their own unique patterns, new models are needed to determine if an image is AI-generated. In this paper, we investigate the ability of the Contrastive Language-Image Pre-training (CLIP) architecture, pre-trained on massive internet-scale data sets, to perform this differentiation. We fine-tune CLIP on real images and AIGI from several generative models, enabling CLIP to determine if an image is AI-generated and, if so, determine what generation method was used to create it. We show that the fine-tuned CLIP architecture is able to differentiate AIGI as well or better than models whose architecture is specifically designed to detect AIGI. Our method will significantly increase access to AIGI-detecting tools and reduce the negative effects of AIGI on society, as our CLIP fine-tuning procedures require no architecture changes from publicly available model repositories and consume significantly less GPU resources than other AIGI detection models.
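The abstract's central claim is that no architectural changes are needed: a publicly available CLIP checkpoint is fine-tuned so that its image embeddings can be classified as "real" or as the output of a particular generator. The sketch below illustrates one plausible way to set that up with the openai/CLIP repository and a linear classification head; the dataset path, class names, epoch count, and other hyperparameters are assumptions for illustration, not the authors' released training code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a stock CLIP checkpoint; the model architecture itself is left unchanged.
clip_model, preprocess = clip.load("ViT-B/32", device=device)
clip_model.float()  # train in fp32 for stability (clip.load uses fp16 on GPU)

# Hypothetical dataset layout: one subfolder per class, e.g.
# data/train/{real, stable_diffusion, midjourney, dalle, stylegan}/...
train_set = ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

# Probe the image-embedding width (512 for ViT-B/32) and attach a linear head
# mapping the embedding to "real" vs. each generation method.
with torch.no_grad():
    feat_dim = clip_model.encode_image(
        torch.zeros(1, 3, 224, 224, device=device)
    ).shape[-1]
head = nn.Linear(feat_dim, len(train_set.classes)).to(device)

# Fine-tune the image encoder together with the classification head.
params = list(clip_model.visual.parameters()) + list(head.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-5)
criterion = nn.CrossEntropyLoss()

clip_model.train()
for epoch in range(3):  # illustrative training budget
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        features = clip_model.encode_image(images)  # (B, feat_dim) image embeddings
        logits = head(features)                     # (B, num_classes) class scores
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")
```

At inference time, the argmax over the head's class logits gives both decisions the abstract describes: whether the image is real or AI-generated and, if generated, which generation method most likely produced it.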

[2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). 
https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? 
(2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. 
arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. 
[2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). 
https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 
6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . 
https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. 
Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  2. Midjourney: Subscription Plans. docs.midjourney.com/docs/plan Accessed 09-21-23 OpenAI [2022] OpenAI: DALL-E Now Available in Beta. openai.com/blog/dall-e-now-available-in-beta Accessed 09-21-23 [5] Greenburger, A.: Artist wins photography contest after submitting ai-generated image, then forfeits prize. ARTnews. Accessed 2023-10-19 [6] Small, Z.: As fight over a.i. artwork unfolds, judge rejects copyright claim. The New York Times. Accessed 2023-09-21 [7] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. 
In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. 
[2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: DALL-E Now Available in Beta. openai.com/blog/dall-e-now-available-in-beta Accessed 09-21-23 [5] Greenburger, A.: Artist wins photography contest after submitting ai-generated image, then forfeits prize. ARTnews. Accessed 2023-10-19 [6] Small, Z.: As fight over a.i. artwork unfolds, judge rejects copyright claim. The New York Times. Accessed 2023-09-21 [7] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. 
[2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Greenburger, A.: Artist wins photography contest after submitting ai-generated image, then forfeits prize. ARTnews. Accessed 2023-10-19 [6] Small, Z.: As fight over a.i. artwork unfolds, judge rejects copyright claim. The New York Times. Accessed 2023-09-21 [7] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. 
[2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Small, Z.: As fight over a.i. artwork unfolds, judge rejects copyright claim. The New York Times. Accessed 2023-09-21 [7] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. 
In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. 
[2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. 
Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. 
[2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. 
[2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  3. OpenAI: DALL-E Now Available in Beta. openai.com/blog/dall-e-now-available-in-beta Accessed 09-21-23 [5] Greenburger, A.: Artist wins photography contest after submitting ai-generated image, then forfeits prize. ARTnews. Accessed 2023-10-19 [6] Small, Z.: As fight over a.i. artwork unfolds, judge rejects copyright claim. The New York Times. Accessed 2023-09-21 [7] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). 
https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Greenburger, A.: Artist wins photography contest after submitting ai-generated image, then forfeits prize. ARTnews. Accessed 2023-10-19 [6] Small, Z.: As fight over a.i. artwork unfolds, judge rejects copyright claim. The New York Times. Accessed 2023-09-21 [7] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). 
https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Small, Z.: As fight over a.i. artwork unfolds, judge rejects copyright claim. The New York Times. Accessed 2023-09-21 [7] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. 
Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) 
Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. 
Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) 
Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. 
[2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. 
[2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. 
Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). 
https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. 
[2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. 
[2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. 
[2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. 
Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). 
https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. 
[2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. 
[2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
[2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. 
[2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. 
[2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. 
Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). 
https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. 
[2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. 
[2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) 
Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) 
Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. 
[2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. 
[2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  6. Vincent, J.: Ai art tools stable diffusion and midjourney targeted with copyright lawsuit. The Verge. Accessed 2023-09-21 [8] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. 
dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. 
[2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. 
[2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. 
arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. 
[2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). 
https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) 
  7. Slotkin, J.: ’monkey selfie’ lawsuit ends with settlement between peta, photographer. National Public Radio. Accessed 2023-09-21 [9] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 
8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] United States Department of Homeland Security: Increasing Threat of Deepfake Identities. dhs.gov/sites/default/files/publications/increasing _threats _of _deepfake _identities _0.pdf Accessed 10-19-23 Liu et al. [2023] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. 
[2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. 
arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, Y., Deng, G., Xu, Z., Li, Y., Zheng, Y., Zhang, Y., Zhao, L., Zhang, T., Liu, Y.: Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.13860 arXiv:2305.13860 [cs.SE] Carlini et al. [2023] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. 
[2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). 
https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) 
Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. 
Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. 
[2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. 
Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. 
Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). 
https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. 
[2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022). https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV]
Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429
Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023). https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV]
Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc. (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf
Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015). https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV]
OpenAI: CLIP. github.com/openai/CLIP. Accessed 2023-09-26
Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc. (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf
Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022). https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV]
Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR (2021). https://proceedings.mlr.press/v139/nichol21a.html
Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042
Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected GANs converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc. (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf
Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017). https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE]
Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022). https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
(2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. 
arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. 
[2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. 
IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). 
https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 
6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . 
https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. 
Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion.
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. 
[2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. 
In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. 
[2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. 
[2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  10. Carlini, N., Jagielski, M., Choquette-Choo, C.A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., Tramèr, F.: Poisoning Web-Scale Training Datasets is Practical. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2302.10149 arXiv:2302.10149 [cs.CR] [12] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. 
arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Vincent, J.: Twitter taught microsoft’s ai chatbot to be a racist asshole in less than a day. The Verge. Accessed 2023-09-22 Shumailov et al. [2023] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. 
International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG] Chen et al. [2021] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. 
Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) 
Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. 
arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. 
In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. 
Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. 
Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. 
[2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. 
Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. 
github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 
139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  12. Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2305.17493 arXiv:2305.17493 [cs.LG]
  13. Chen, X., Dong, C., Ji, J., Cao, J., Li, X.: Image manipulation detection by multi-view multi-scale supervision, 14165–14173 (2021) https://doi.org/10.1109/ICCV48922.2021.01392 Athanasiadou et al. [2018] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198 Kwon et al. [2022] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. 
[2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5 Radford et al. [2021] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). 
https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445 Wang et al. [2020] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. 
[2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872 Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. 
[2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. 
[2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. 
In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. 
In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. 
[2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. 
[2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  14. Athanasiadou, E., Geradts, Z., Van Eijk, E.: Camera recognition with deep learning. Forensic Sciences Research (2018) https://doi.org/10.1080/20961790.2018.1485198
  15. Kwon, M.-J., Nam, S.-H., Yu, I.-J., Lee, H.-K., Kim, C.: Learning jpeg compression artifacts for image manipulation detection and localization. International Journal of Computer Vision 130 (2022) https://doi.org/10.1007/s11263-022-01617-5
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. 
[2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. 
In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. 
In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. 
[2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. 
[2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  16. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning (2021). https://api.semanticscholar.org/CorpusID:231591445
https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. 
In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . 
https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. 
Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  17. Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: Cnn-generated images are surprisingly easy to spot… for now. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8692–8701 (2020). https://doi.org/10.1109/CVPR42600.2020.00872
  18. Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints, 2210–14571 (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Gragnaniello et al. [2021] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 
17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. 
Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. 
[2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. 
In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). 
https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. 
[2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  19. Gragnaniello, D., Cozzolino, D., Marra, F., Poggi, G., Verdoliva, L.: Are gan generated images easy to detect? a critical analysis of the state-of-the-art. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2021). https://doi.org/10.1109/ICME51207.2021.9428429 Wang et al. [2023] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV] Dhariwal and Nichol [2021] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. 
arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf Ricker et al. [2022] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. 
arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV] Yu et al. [2015] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV] [25] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. 
Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23 Ho et al. [2020] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc., ??? (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf Liu et al. [2022] Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV] Nichol and Dhariwal [2021] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR, ??? (2021). https://proceedings.mlr.press/v139/nichol21a.html Rombach et al. [2022] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. 
  20. Wang, Z., Bao, J., Zhou, W., Wang, W., Hu, H., Chen, H., Li, H.: DIRE for Diffusion-Generated Image Detection. arXiv e-prints (2023) https://doi.org/10.48550/arXiv.2303.09295 arXiv:2303.09295 [cs.CV]
  21. Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc. (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf
  22. Ricker, J., Damm, S., Holz, T., Fischer, A.: Towards the Detection of Diffusion Model Deepfakes. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2210.14571 arXiv:2210.14571 [cs.CV]
[2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042 . https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01042 Sauer et al. [2021] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected gans converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc., ??? (2021). https://proceedings.neurips.cc/paper _files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf Karras et al. [2019] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019) Karras et al. [2017] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. [2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE] Wang et al. 
[2022] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG] Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
  23. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop. arXiv e-prints (2015) https://doi.org/10.48550/arXiv.1506.03365 arXiv:1506.03365 [cs.CV]
  24. OpenAI: CLIP. github.com/openai/CLIP Accessed 09-26-23
  25. Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc. (2020). https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf
  26. Liu, L., Ren, Y., Lin, Z., Zhao, Z.: Pseudo Numerical Methods for Diffusion Models on Manifolds. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2202.09778 arXiv:2202.09778 [cs.CV]
  27. Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR (2021). https://proceedings.mlr.press/v139/nichol21a.html
  28. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10674–10685. IEEE Computer Society, Los Alamitos, CA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.01042
  29. Sauer, A., Chitta, K., Müller, J., Geiger, A.: Projected GANs converge faster. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17480–17492. Curran Associates, Inc. (2021). https://proceedings.neurips.cc/paper_files/paper/2021/file/9219adc5c42107c4911e249155320648-Paper.pdf
  30. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
  31. Karras, T., Aila, S., Laine, S., Lehtinen, J.: Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv e-prints (2017) https://doi.org/10.48550/arXiv.1710.10196 arXiv:1710.10196 [cs.NE]
  32. Wang, Z., Zheng, H., He, P., Chen, W., Zhou, M.: Diffusion-GAN: Training GANs with Diffusion. arXiv e-prints (2022) https://doi.org/10.48550/arXiv.2206.02262 arXiv:2206.02262 [cs.LG]
Authors (3)
  1. A. G. Moskowitz (3 papers)
  2. T. Gaona (1 paper)
  3. J. Peterson (78 papers)
Citations (1)