DLP-GAN: learning to draw modern Chinese landscape photos with generative adversarial network (2403.03456v2)
Abstract: Chinese landscape painting has a unique and distinctive artistic style, with drawing techniques that are highly abstract in both the use of color and the depiction of objects. Previous methods focus on transferring modern photos into ancient ink paintings, but little attention has been paid to the reverse direction: translating landscape paintings into modern photos. To address this, in this paper we (1) propose DLP-GAN (Draw modern Chinese Landscape Photos with Generative Adversarial Network), an unsupervised cross-domain image translation framework with a novel asymmetric cycle mapping, and (2) introduce a generator based on a dense-fusion module to match the two translation directions. Moreover, a dual-consistency loss is proposed to balance the realism and abstraction of the generated images. In this way, our model can draw landscape photos and sketches in the modern sense. Finally, on our collected datasets of modern landscapes and sketches, we compare the images generated by our model with those of other benchmarks. Extensive experiments, including user studies, show that our model outperforms state-of-the-art methods.
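The abstract does not spell out how the dual-consistency loss is computed, but the stated idea — balancing pixel-level realism against feature-level abstraction — can be sketched roughly as follows. Everything here is an illustrative assumption, not the authors' implementation: the weights `lambda_pix` and `lambda_feat` are made up, and a toy average-pooling "encoder" stands in for the deep feature network (e.g., a pretrained VGG) a real system would use.

```python
import numpy as np

def avg_pool(img, k=4):
    """Toy stand-in for a deep feature extractor: k x k average pooling.
    A real implementation would use features from a pretrained network."""
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def dual_consistency_loss(real, reconstructed,
                          lambda_pix=10.0, lambda_feat=1.0):
    """Hypothetical dual-consistency loss: a pixel-level cycle term
    (realism) plus a feature-level term (abstraction). The weights and
    the pooling encoder are illustrative, not taken from the paper."""
    # Pixel-level term: mean absolute error between input and reconstruction.
    pixel_term = np.abs(reconstructed - real).mean()
    # Feature-level term: mean absolute error in the pooled feature space.
    feat_term = np.abs(avg_pool(reconstructed) - avg_pool(real)).mean()
    return lambda_pix * pixel_term + lambda_feat * feat_term

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((64, 64))                                   # "real" image
    x_rec = np.clip(x + 0.05 * rng.normal(size=x.shape), 0, 1) # noisy cycle output
    print(round(float(dual_consistency_loss(x, x_rec)), 4))
```

In a full training loop this term would be added to the adversarial loss, with the two weights trading off how faithfully the cycle reproduces pixels versus how freely it may abstract away detail.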
6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. 
In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhu, J.-Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., Shechtman, E.: Toward multimodal image-to-image translation. Advances in neural information processing systems 30 (2017) (7) Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). 
https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). 
https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. 
IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). 
https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
(10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658
(11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
(12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
(13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
(14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. 
Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). 
https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/cvpr46437.2021.00510
(10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658
(11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
(12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
(13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
(14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhu, J.-Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., Shechtman, E.: Toward multimodal image-to-image translation. Advances in neural information processing systems 30 (2017) (7) Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). 
https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). 
https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). 
https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. 
In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. 
In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium.
Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. 
Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. 
IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. 
Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: ChipGAN: A generative adversarial network for Chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent GAN. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for Chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. 
Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). 
https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. 
Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. 
IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
(10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: AdaAttN: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658
(11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced CycleGAN framework for style transfer from scenery photos to Chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
(12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
(13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for Chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
(14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: EMOCGAN: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: ChipGAN: A generative adversarial network for Chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent GAN. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for Chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022).
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhu, J.-Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232 (2017). https://doi.org/10.1109/iccv.2017.244 (6) Zhu, J.-Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., Shechtman, E.: Toward multimodal image-to-image translation. Advances in neural information processing systems 30 (2017) (7) Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). 
https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). 
https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). 
https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhu, J.-Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., Shechtman, E.: Toward multimodal image-to-image translation. Advances in neural information processing systems 30 (2017) (7) Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). 
https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). 
https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). 
https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 
212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- (5) Zhu, J.-Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232 (2017). https://doi.org/10.1109/iccv.2017.244
(6) Zhu, J.-Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., Shechtman, E.: Toward multimodal image-to-image translation. Advances in neural information processing systems 30 (2017)
(7) Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). https://doi.org/10.1109/cvpr.2017.632
(8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754
(9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510
(10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658
(11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
(12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
(13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
(14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. 
In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. 
IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). 
https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
(10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658
(11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
(12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
(13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
(14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579.
IEEE
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. 
Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). 
https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. 
Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. 
IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
- Zhu, J.-Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., Shechtman, E.: Toward multimodal image-to-image translation. Advances in neural information processing systems 30 (2017) (7) Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. 
IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). 
https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017). https://doi.org/10.1109/cvpr.2017.632 (8) Li, R., Wu, C.-H., Liu, S., Wang, J., Wang, G., Liu, G., Zeng, B.: Sdp-gan: saliency detail preservation generative adversarial networks for high perceptual quality style transfer. IEEE Transactions on Image Processing 30, 374–385 (2020). https://doi.org/10.1109/TIP.2020.3036754 (9) Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). 
https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 
212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020).
https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). 
https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). 
https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. 
Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. 
In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/cvpr46437.2021.00510 (10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658 (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020).
https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). 
https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. 
In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). 
https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
- Lin, T., Ma, Z., Li, F., He, D., Li, X., Ding, E., Wang, N., Li, J., Gao, X.: Drafting and revision: Laplacian pyramid network for fast high-quality artistic style transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5141–5150 (2021). https://doi.org/10.1109/cvpr46437.2021.00510
(10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658
(11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
(12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
(13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
(14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
(10) Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658
(11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
(12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
(13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
(14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). 
https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369. IEEE (2010). https://doi.org/10.1109/icpr.2010.579
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. 
In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. 
IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369. IEEE (2010). https://doi.org/10.1109/icpr.2010.579
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., Ding, E.: AdaAttN: Revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021). https://doi.org/10.1109/iccv48922.2021.00658
(11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced CycleGAN framework for style transfer from scenery photos to Chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
(12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
(13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for Chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
(14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: EMOCGAN: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: ChipGAN: A generative adversarial network for Chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent GAN. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for Chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). 
https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). 
https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). 
https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. 
In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
(15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. 
In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
- (11) Peng, X., Peng, S., Hu, Q., Peng, J., Wang, J., Liu, X., Fan, J.: Contour-enhanced cyclegan framework for style transfer from scenery photos to chinese landscape paintings. Neural Computing and Applications, 1–22 (2022). https://doi.org/10.1007/s00521-022-07432-w
- (12) Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE
- (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
- (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
- (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
- (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
- (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
- (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
- (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
- (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
- (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
- (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
- (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
- (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
- (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
- (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
- (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
- (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
- (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
- (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
- (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
- (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
- (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
- (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
- (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
- (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
- (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
- (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
- (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
- (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
- (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
- (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
- (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
- (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
- (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
- (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
- (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
- (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. 
Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). 
https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. 
Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. 
IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). IEEE. https://doi.org/10.1109/icpr.2010.579
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Zheng, C., Zhang, Y.: Two-stage color ink painting style transfer via convolution neural network. In: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN), pp. 193–200 (2018). https://doi.org/10.1109/i-span.2018.00039. IEEE (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622 (15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). 
https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. 
Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
- (13) Zhou, L., Wang, Q.-F., Huang, K., Lo, C.-H.: An interactive and generative approach for Chinese shanshui painting document. In: 2019 International Conference on Document Analysis and Recognition (ICDAR), pp. 819–824 (2019). https://doi.org/10.1109/icdar.2019.00136. IEEE
- (14) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
- (15) Bharti, V., Biswas, B., Shukla, K.K.: EMOCGAN: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
- (16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: ChipGAN: A generative adversarial network for Chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
- (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent GAN. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
- (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for Chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
- (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
- (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
- (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
- (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
- (23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
- (24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
- (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
- (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
- (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
- (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
- (29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
- (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
- (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
- (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
- (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
- (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
- (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
- (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
- (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
- (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
- (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
- (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
- (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
- (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
- (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
- (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
- (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
- (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
- (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
- (49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655 (17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). 
https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. 
Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). 
https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. 
Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. 
IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020). https://doi.org/10.1145/3422622
(15) Bharti, V., Biswas, B., Shukla, K.K.: Emocgan: a novel evolutionary multiobjective cyclic generative adversarial network and its application to unpaired image translation. Neural Computing and Applications 34(24), 21433–21447 (2022). https://doi.org/10.1007/s00521-021-05975-y
(16) He, B., Gao, F., Ma, D., Shi, B., Duan, L.-Y.: Chipgan: A generative adversarial network for chinese ink wash painting style transfer. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 1172–1180 (2018). https://doi.org/10.1145/3240508.3240655
(17) Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972
(18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer
(19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
(20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. 
In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). 
https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). 
https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. 
In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. 
IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, W., Li, Y., Ye, H., Ye, F., Xu, X.: Ink painting style transfer using asymmetric cycle-consistent gan. 
Available at SSRN 4109972 (2022). https://doi.org/10.2139/ssrn.4109972 (18) Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). 
https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. 
Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). 
https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. 
In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xiong, C., Wu, T., Zhou, Y., Zhang, L., Chu, R.: Neural abstract style transfer for chinese traditional painting. In: Asian Conference on Computer Vision, pp. 212–227 (2018). https://doi.org/10.1007/978-3-030-20890-5_14. Springer (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). 
https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061 (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. 
Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. 
In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in neural information processing systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8 (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. 
Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391 (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). 
https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). 
https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer (23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- (19) Qiao, T., Zhang, W., Zhang, M., Ma, Z., Xu, D.: Ancient painting to natural image: A new solution for painting processing. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 521–530 (2019). https://doi.org/10.1109/wacv.2019.00061
- (20) Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
- (21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
- (22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
- (23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
- (24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
- (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
- (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
- (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
- (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
- (29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
- (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
- (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
- (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
- (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
- (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
- (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
- (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
- (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
- (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
- (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
- (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
- (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
- (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
- (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
- (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
- (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
- (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
- (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
- (49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. 
In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. 
In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). 
https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. 
arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Qin, S., Liu, S.: Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Computing and Applications 34(24), 21551–21566 (2022). https://doi.org/10.1007/s00521-021-06147-8
(21) Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional Chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end Chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. 
Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. 
In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Sun, H., Wu, L., Li, X., Meng, X.: Style-woven attention network for zero-shot ink wash painting style transfer. In: Proceedings of the 2022 International Conference on Multimedia Retrieval, pp. 277–285 (2022). https://doi.org/10.1145/3512527.3531391
(22) Li, J., Wang, Q., Li, S., Zhong, Q., Zhou, Q.: Immersive traditional chinese portrait painting: Research on style transfer and face replacement. In: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), pp. 192–203 (2021). https://doi.org/10.1007/978-3-030-88007-1_16. Springer
(23) Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391
(24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
(34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). 
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. 
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 
8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). 
https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). 
https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. 
In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. 
In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. 
IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Xue, A.: End-to-end chinese landscape painting creation using generative adversarial networks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3863–3871 (2021). https://doi.org/10.1109/wacv48630.2021.00391 (24) Dhariwal, P., Nichol, A.: Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021) (25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
(35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
(36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
(37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
(38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
(39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
(40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
(41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
(42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
(43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
(44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
(45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
(46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
(47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
(48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
(49) Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems 34, 8780–8794 (2021)
(25) Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020)
(26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757
(27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382
(28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023)
(29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342
(30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917
(31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11
(32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470
(33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. 
Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). 
https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). 
https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
- Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems 33, 6840–6851 (2020) (26) Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., Norouzi, M.: Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–10 (2022). https://doi.org/10.1145/3528233.3530757 (27) Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library.
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Su, X., Song, J., Meng, C., Ermon, S.: Dual diffusion implicit bridges for image-to-image translation. arXiv preprint arXiv:2203.08382 (2022). https://doi.org/10.48550/arXiv.2203.08382 (28) Li, B., Xue, K., Liu, B., Lai, Y.-K.: BBDM: Image-to-image translation with Brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving CycleGAN-AdaIN framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks.
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). 
https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. 
arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Li, B., Xue, K., Liu, B., Lai, Y.-K.: Bbdm: Image-to-image translation with brownian bridge diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1952–1961 (2023) (29) Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. 
In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Li, H., Wu, X.-J.: Densefuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing 28(5), 2614–2623 (2018). https://doi.org/10.1109/tip.2018.2887342 (30) Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). 
https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Wang, T.-C., Liu, M.-Y., Zhu, J.-Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798–8807 (2018). https://doi.org/10.1109/cvpr.2018.00917 (31) Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). 
https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. 
In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. 
IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. 
IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. 
Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). 
https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. 
Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Huang, X., Liu, M.-Y., Belongie, S., Kautz, J.: Multimodal unsupervised image-to-image translation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189 (2018). https://doi.org/10.1007/978-3-030-01219-9_11 (32) Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, F., Gao, H., Lai, Y.: Detail-preserving cyclegan-adain framework for image-to-ink painting translation. IEEE Access 8, 132002–132011 (2020). https://doi.org/10.1109/access.2020.3009470 (33) Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Chung, C.-Y., Huang, S.-H.: Interactively transforming chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4 (34) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 
https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). 
https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). 
https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). 
https://doi.org/10.1109/icpr.2010.579. IEEE
- Chung, C.-Y., Huang, S.-H.: Interactively transforming Chinese ink paintings into realistic images using a border enhance generative adversarial network. Multimedia Tools and Applications, 1–34 (2022). https://doi.org/10.1007/s11042-022-13684-4
- He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90
- Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243
- Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304
- Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556
- Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust CNN model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290
- Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019)
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
- Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
- Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric CycleGAN for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
- Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
- Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems 30 (2017)
- Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
- Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30 (2017)
- Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying MMD GANs. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
- Hore, A., Ziou, D.: Image quality metrics: PSNR vs. SSIM. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016). https://doi.org/10.1109/cvpr.2016.90 (35) Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017). https://doi.org/10.1109/cvpr.2017.243 (36) Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). 
https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). 
https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. 
arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 
2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017). https://doi.org/10.1109/iccv.2017.304 (37) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. 
Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). 
https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). 
https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). 
https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. 
Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). 
https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). 
https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014). https://doi.org/10.48550/arXiv.1409.1556 (38) Poma, X.S., Riba, E., Sappa, A.: Dense extreme inception network: Towards a robust cnn model for edge detection. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1923–1932 (2020). https://doi.org/10.1109/wacv45572.2020.9093290 (39) Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018). https://doi.org/10.1109/cvpr.2018.00068 (40) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L.: Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32 (2019) (41) Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980 (42) Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167 (43) Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. 
Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044 (44) Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. 
Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3 (45) Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017) (46) Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. 
IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE transactions on neural networks and learning systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725 (47) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017) (48) Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. 
In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401 (49) Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). https://doi.org/10.48550/arXiv.1412.6980
- Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
- Dou, H., Chen, C., Hu, X., Jia, L., Peng, S.: Asymmetric cyclegan for image-to-image translations with uneven complexities. Neurocomputing 415, 114–122 (2020). https://doi.org/10.1016/j.neucom.2020.07.044
- Peng, Z., Wang, H., Weng, Y., Yang, Y., Shao, T.: Unsupervised image translation with distributional semantics awareness. Computational Visual Media 9(3), 619–631 (2023). https://doi.org/10.1007/s41095-022-0295-3
- Liu, M.-Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. Advances in neural information processing systems 30 (2017)
- Tang, H., Liu, H., Xu, D., Torr, P.H., Sebe, N.: Attentiongan: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Transactions on Neural Networks and Learning Systems (2021). https://doi.org/10.1109/TNNLS.2021.3105725
- Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems 30 (2017)
- Bińkowski, M., Sutherland, D.J., Arbel, M., Gretton, A.: Demystifying mmd gans. arXiv preprint arXiv:1801.01401 (2018). https://doi.org/10.48550/arXiv.1801.01401
- Hore, A., Ziou, D.: Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on Pattern Recognition, pp. 2366–2369 (2010). https://doi.org/10.1109/icpr.2010.579. IEEE
- Xiangquan Gui
- Binxuan Zhang
- Li Li
- Yi Yang